PD-L1-targeted microbubbles loaded with docetaxel produce a synergistic effect for the treatment of lung cancer under ultrasound irradiation. Abstract: Immunotherapy is gradually becoming as important as traditional therapy in the treatment of cancer, but adverse drug reactions limit patient benefits from PD1/PD-L1 checkpoint inhibitor drugs in the treatment of non-small cell lung cancer (NSCLC). As a chemotherapeutic drug for NSCLC, docetaxel (DTX) can synergize with PD1/PD-L1 checkpoint inhibitors but increases haematotoxicity and neurotoxicity. Herein, anti-PD-L1 monoclonal antibody (mAb)-conjugated and docetaxel-loaded multifunctional lipid-shelled microbubbles (PDMs), which were designed with biologically safe phospholipids to produce synergistic antitumour effects, reduced the incidence of side effects and promoted therapeutic effects under ultrasound (US) irradiation. The PDMs were prepared by the acoustic-vibration method and then conjugated with an anti-PD-L1 mAb. The material features of the microbubbles, cytotoxic effects, cellular apoptosis and cell cycle inhibition were studied. A subcutaneous tumour model was established to test the drug concentration-dependent and antitumour effects of the PDMs combined with US irradiation, and an orthotopic lung tumour model simultaneously verified the antitumour effect of this synergistic treatment. The PDMs achieved higher cellular uptake than free DTX, especially when combined with US irradiation. The PDMs combined with US irradiation also induced an increased rate of cellular apoptosis and an elevated G2-M arrest rate in cancer cells, which was positively correlated with PD-L1 expression. An in vivo study showed that the synergistic treatment had relatively strong effects on tumour growth inhibition, increased survival time and decreased adverse effect rates. Our study possibly provides a well-controlled design for immunotherapy and chemotherapy and has promising potential as a clinical application for NSCLC treatment.
[…] for ultrasound (US)-targeted delivery 15. Microbubbles' smaller size allows extravasation from blood vessels into surrounding tissues, improving stability and giving longer residence times in the systemic circulation 16. Therefore, lipid-shelled microbubbles might be an appropriate drug delivery system for the combination of a PD1/PD-L1 checkpoint inhibitor and docetaxel (DTX). As one of the most common noninvasive physical radiation sources, US plays an important role in clinical diagnosis and therapy. Lung US as an emerging theranostic modality exhibits non-ionizing properties, high local resolution, real-time imaging, and low cost 17. Clinical trials have verified the efficacy and safety of pulmonary US irradiation treatment combined with corresponding drugs 18,19. The cavitation and sonoporation effects are generally believed to contribute to the therapeutic effect of US irradiation, which can ensure specialized targeted delivery of proteins, genes, exosomes or traditional chemotherapeutic drugs [20][21][22]. Hence, we hypothesized that immunochemotherapeutic phospholipid microbubbles combined with US irradiation might enhance the efficacy and reduce the adverse effects of the combination of a PD1/PD-L1 inhibitor and DTX. To verify our hypothesis, we designed a multifunctional microbubble system in which the membrane was DTX-loaded and then anti-PD-L1 monoclonal antibody (mAb)-modified (PDMs). The anti-PD-L1 mAb could block the immunosuppressive PD1/PD-L1 pathway, while the PDMs directly kill tumour cells via DTX. Moreover, US irradiation was used to rupture the PDMs to further increase drug concentrations in the tumour. The cavitation and sonoporation effects improve the ability of drugs to enter […] 0.964 mV after coupling to an anti-PD-L1 mAb. These increases were concluded to be the result of successful conjugation of the antibody. The dispersions of the microbubbles are shown in Fig. 2B. The change in size could be a sign of successful binding of the antibody to the surface of the microbubbles. The encapsulation and loading efficiencies of DTX in the DTX-loaded microbubbles (DMs) were 59.21 ± 1.97% and 4.75 ± 0.65%, respectively, and in the PDMs were 57.34 ± 2.61% and 4.45 ± 0.91%, respectively (Table S2). The profiles of DTX release from the microbubbles with/without US irradiation were examined to assess the effects of sonication on DTX release. As shown in Fig. 2C, US irradiation significantly promoted drug release. Conjugation of the CY5-labelled anti-PD-L1 mAb to the FITC-labelled DTX microbubbles was observed by laser scanning confocal microscopy (LSCM) (Fig. 2D). The microbubbles consistently maintained a round shape and were bound with the fluorescein-labelled antibody. The BMs, DMs, and PDMs had similar contrast imaging capabilities in vitro and in vivo (Fig. 2E); the peak intensity and time to peak were not significantly different among the microbubbles (Fig. 2F and Fig. 2G). In addition to being expressed on a number of cancer cell lineages, PD-L1 is also expressed on non-parenchymal cells and non-haematopoietic lineages 5. The distribution of PD-L1 in other organs might prevent the enhancing effect of the microbubbles from being obvious over a short period of time.
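For reference, the encapsulation efficiency (EE) and drug-loading efficiency (LE) quoted above are conventionally defined as mass ratios; the exact formulas used in the original work are not reproduced in this excerpt, so the standard definitions below are assumed:

\mathrm{EE}\,(\%) = \frac{m_{\mathrm{DTX,\ encapsulated}}}{m_{\mathrm{DTX,\ added}}}\times 100, \qquad \mathrm{LE}\,(\%) = \frac{m_{\mathrm{DTX,\ encapsulated}}}{m_{\mathrm{drug\text{-}loaded\ microbubbles,\ total}}}\times 100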
We also investigated whether BMs, DMs, and PDMs can induce haemolysis ( Fig. S1A and B), the result showed none of the formulations was haemolytic. The toxicity study showed that no weight loss was observed, and serum biochemical parameters were within the corresponding reference ranges (Fig. S2A-E). HE staining showed that the free combo group resulted in thickened alveolar walls and lymphocyte infiltration in normal lung tissue. Our findings illustrate the immunochemotherapeutic microbubbles are biologically safe and lung protective. Microbubble cytotoxicity in vitro The PD-L1 expressions of one type of mouse cell (LLC) and three types of human cells (NCI-H460, NCI-H1299, and A549) were tested by flow cytometry. As shown in Fig. 3A and B, the expression of LLC cells was similar to that of NCI-H460 cells, while NCI-H1299 cells had the highest expression and A549 cells had the lowest. Free DTX and DMs had similar cytotoxicity in the four tumour cell lines. PDMs had stronger efficacy because of the conjugated anti-PD-L1 mAb, and this efficacy was strengthened by US irradiation (Fig. 3C-F). The results demonstrated that anti-PD-L1 mAb targeting and US irradiation could both enhance the cytotoxicity of DTX to the target tumour tissues. In vitro enhancement of cellular drug uptake mediated by PD-L1 To further assess the relationship between the targeting efficiency of PDMs and affinity of DTX for LLC cells in vitro, the cellular uptake of DTX in different formulations was determined by fluorescence imaging and flow cytometry. C6 to indicate drug uptake was contained in microbubbles at a concentration equimolar to DTX. As shown in Fig. 3A and D, CLSM images indicated elevated intracellular drug uptake of C6-PDMs, especially with US irradiation. The flow cytometry results were also consistent with the 8 fluorescence imaging results (Fig. S4). Therefore, PD-L1-mediated internalization was more efficient than passive diffusion and nonspecific target. US irradiation could promote drug uptake, which might account for the increased cellular toxicity of PDMs when combined with US irradiation. Apoptosis induction and cell cycle inhibition in LLC cells in vitro The cell apoptosis study assessing different formulations utilized the Annexin V-FITC/PI method to further explore tumour killing. LLC cells were treated with US irradiation, free DTX, DMs, PDMs, or PDMs + US. After 24 h of incubation, the total apoptosis rate was analysed by flow cytometry. As shown in Fig. 3B and E, PDMs + US induced the highest apoptosis rate, free DTX and DMs produced similar apoptosis rates, and US irradiation monotherapy had no influence on apoptosis. For cell cycle analysis, the PI/RNase method was employed and the remaining steps were the similar. In agreement with the cytotoxicity, cellular uptake, and apoptosis studies, the cell cycle analysis showed that cell cycle inhibition increased in response to different formulations ( Fig. 3C and F). Hence, using the aforementioned data, it can be concluded that the PDMs combined with US irradiation increased drug uptake, inhibited the cell cycle, promoted apoptosis, and enhanced drug toxicity to the cells. Tumour resistance to Paclitaxel family agent has been a problem troubling clinicians for a long time. Until studies on transcription and post-transcriptional mechanisms revealed that Paclitaxel family agent induces the expression of PD-L1 immunosuppressive molecules through the mitogen-activated protein kinase (MAPK) pathway 23 . 
The evidence suggests patients may benefits from a synergy of docetaxel interest analysis (Fig. 5C) showed that PDMs combined with US irradiation displayed the highest fluorescence efficiency, which might portend enhanced therapeutic effects. As predicted, US irradiation promoted rapid drug release in tumours, and the active targeting by the anti-PD-L1 mAb on the surface of PDMs promoted drug accumulation. Comparing the distributions of different drugs in major metabolic organs showed that the distribution of free drug in the liver was distinctly lower than that of the microbubbles. It is deduced that increasing the time between injections might reduce liver injury, and this was confirmed in the biotoxicity experiment. Due to the side effects of immunochemotherapy, a variety of materials have been used as adjuncts [25][26][27][28] . The targeted drug delivery system of ultrasonic microbubbles, coupled with an antibody that targets a corresponding antigen overexpressed on tumour cells, provides a promising strategy for reducing the severe adverse effects associated with chemotherapeutic drugs 29,30 . The fabricated PDMs could effectively target the tumour tissue and thus reduce off-target toxicity. PD-L1 is highly expressed on the surface of various tumour cells and helps tumour cells evade antitumour immunity. Previous studies have indicated that anti-PD-L1 mAb-coupled drug delivery systems can target tumour cells 31 and activate the immune system 32 , and this approach can synergize with chemotherapy. Therefore, it can be concluded that PDMs combined with US irradiation exhibited an increased drug concentration and extended drug duration in tumours and were therefore beneficial for therapeutic effects of increased strength and duration. days, the mice were weighed, the tumour volumes were measured, and blood was obtained to assess liver and kidney functions. After all the treatments, the mice were sacrificed for subsequent histopathological study of the tumours and vital organs to evaluate efficacy and safety. Inhibited tumour growth in a subcutaneous tumour model after treatments with As shown in Fig. 6B-D, the PDMs combined with US irradiation group had the highest survival rate through the termination of experiment, and this group showed the best inhibition of tumour growth. The PDMs alone had a better therapeutic effect than the free-drug combination and ranked second. In addition, the synergistic therapy had more obvious efficacy than chemotherapy. The DMs produced slightly improved therapeutic effects on the tumour volume inhibition and survival rate. The difference in body weight was not obvious differences among the groups. Apoptosis and proliferation in tumours were analysed via the TUNEL assay and CD31 and Ki67 immunohistochemistry ( Fig.7 A-F). The data showed that the PDMs combined with US irradiation group had the highest apoptosis rate and lowest proliferation level. The trend in the data was in accordance with the tumour growth observations. The results for cleaved caspase-3, cleaved caspase-8, and cleaved caspase-9 in Western blot assays also revealed that the PDMs combined with US irradiation group had the highest apoptosis rate (Fig. 8A-D). It has been verified that US irradiation can enhance checkpoint inhibitor therapy 33 . US irradiation combined with immunotherapeutic nanomaterials has been studied in colorectal cancer 33 , B-cell lymphoma 34 , and breast cancer 35 . US irradiation combined with microbubble therapy for local lesions owns many advantages. 
The cavitation effect caused by microbubble rupturing under US irradiation can achieve high drug enrichment 18,36 . The sonoporation effect promotes drug uptake and enhances the delivery of small and large molecules 37,38 . Moreover, studies of US irradiationenhanced microbubble tumour treatments indicated that this approach could induce rapid vascular damage and shut down blood flow 36 . Our results demonstrated that PDMs combined with US irradiation can produce a strong antitumour effect. Immune activation and cytokine production alleviation by microbubbles To study the infiltration of immune cells into the tumour site after treatment, tumorinfiltrating lymphocytes (TILs) were harvested from tumours and analysed by immunofluorescence and flow cytometry on day 15 of the experimental process. The flow cytometry results also showed that CD8 + and CD4 + cell infiltration was approximately 4-fold greater in the PDMs combined with US irradiation group than in the control group ( Fig. 8E and F). Immunofluorescence staining revealed that the tumours from the PDMs combined with US irradiation group were remarkably infiltrated by both CD8 + and CD4 + T cells, while untreated tumours exhibited limited infiltration (Fig. 8G). We also observed that the level of TNF-α, which induces tumour cell apoptosis, of the mice. ℃ with/without US irradiation (2.0 W/cm 2 , 1 MHz, duty cycle 50% for 5 minutes). After 1 mL of dialysate water was removed from the container at predetermined intervals and stored at 4 ℃ for analysis, the same volume of PBS was used to replenish the sampled mixture. The release amount was measured by the same HPLC method described above. Three independent samples from each group were tested and analysed. Toxicities of different microbubble formulations All animal procedures were performed in accordance with the Guidelines for Care and atmosphere, the supernatant was removed, and the cells were rinsed carefully with PBS twice, followed by the addition of RPMI 1640 medium (100 μL) and CCK-8 (10 μL) for 1 h. The OD was detected at 450 nm using a microplate reader. Relative cell viability (RCV) (%) was calculated as RCV (%) = OD test /OD control × 100%. Evaluation of specific DTX cellular uptake mediated by PD-L1 C6 was added to microbubbles to analyse cellular drug uptake. LLC cells were seeded in six-well plates with the corresponding incubation medium at a density of 5 × The antitumour efficacies of the microbubbles in the orthotopic tumour models were evaluated by CT scan, body weight, and survival rate statistics. Statistical analysis All statistical analyses were performed using SPSS 21.0 software and GraphPad Prism 8 software. Data are expressed as mean ± standard deviation unless otherwise noted. An unpaired two-tailed t-test was used to compare between two groups. When comparing multiple groups, one-way ANOVA with the Newman-Keuls post hoc test was performed. Kaplan-Meier survival curves were analysed using the log-rank test with the Tukey post hoc test. Differences were considered statistically significant when P < 0.05. Statistical significance was noted as follows: # P < 0.05 and ## P < 0.01 compared with the control group, and * P < 0.05 and ** P < 0.01 between groups. Conclusion Here, we developed anti-PD-L1 mAb-conjugated and DTX-loaded multifunctional lipid-shelled microbubbles and verified that the anti-PD-L1 mAb and US irradiation could promote DTX uptake. 
We validated that a checkpoint inhibitor could be integrated into a therapeutic microbubble to produce a combination with synergistic effects. We also proved that US irradiation-mediated immunochemotherapeutic microbubble therapy could be used to treat lung cancer. This immunochemotherapeutic microbubble approach illustrates a successful treatment strategy that can be extended to other combinations based on clinically approved antibodies (e.g. anti-CTLA-4, anti-4-1BB, or anti-TIM-3 antibodies) or traditional chemotherapeutic drugs (e.g. […]). † These authors contributed equally to this work. Conflicts of interest: The authors have declared that no competing interest exists. [Figure legend fragments: n = 6 biologically independent mice/animals; one-way ANOVA with the Newman-Keuls post hoc test; log-rank test followed by Tukey's post hoc test for survival data; #P < 0.05 and ##P < 0.01 compared with the control group (normal saline); *P < 0.05 and **P < 0.01 between groups.]
Angelica gigas Nakai and Soluplus-Based Solid Formulations Prepared by Hot-Melting Extrusion: Oral Absorption Enhancing and Memory Ameliorating Effects Oral solid formulations based on Angelica gigas Nakai (AGN) and Soluplus were prepared by the hot-melting extrusion (HME) method. AGN was pulverized into coarse and ultrafine particles, and their particle size and morphology were investigated. Ultrafine AGN particles were used in the HME process with high shear to produce AGN-based formulations. In simulated gastrointestinal fluids (pH 1.2 and pH 6.8) and water, significantly higher amounts of the major active components of AGN, decursin (D) and decursinol angelate (DA), were extracted from the HME-processed AGN/Soluplus (F8) group than the AGN EtOH extract (ext) group (p < 0.05). Based on an in vivo pharmacokinetic study in rats, the relative oral bioavailability of decursinol (DOH), a hepatic metabolite of D and DA, in F8-administered mice was 8.75-fold higher than in AGN EtOH ext-treated group. In scopolamine-induced memory-impaired mice, F8 exhibited a more potent cognitive enhancing effect than AGN EtOH ext in both a Morris water maze test and a passive avoidance test. These findings suggest that HME-processed AGN/Soluplus formulation (F8) could be a promising therapeutic candidate for memory impairment. Introduction Angelica gigas (Dang-Gui) is a biennial or short lived perennial plant found in China, Japan, and Korea. The root of Angelica gigas has been used in oriental traditional medicine and is marketed as a functional food product in Europe and North America [1]. Cham-Dang-Gui (Korean Angelica, the dried root of Angelica gigas Nakai (AGN)) has been principally cultivated in Korea and used as a Korean medicinal herb. It contains several chemicals, such as pyranocoumarins, essential oils, and polyacetylenes [2]. The major active components of AGN are hydrophobic pyranocoumarins, including decursin (D), decursinol (DOH), and decursinol angelate (DA). They have analgesic, anticancer, anti-inflammatory, and neuroprotective effects [2][3][4][5]. Particularly, the active ingredients of AGN, as single components or extracts, have been reported to ameliorate memory impairment in animal studies [6,7]. Furthermore, the pharmacological efficacies and in vivo pharmacokinetic properties (including absorption and metabolism) after oral administration of the active components of AGN have been demonstrated [8,9]. Natural products can be processed by extraction, filtration, concentration, freeze drying, and their combination, but these methods are generally time-consuming, labor-intensive, and expensive. For the efficient extraction of pharmacologically active components, alcohols have generally been used as extraction solvents [10]. Besides alcohols, hot water can be used to extract bioactive components from AGN. However, that method seems to be inappropriate for extracting poorly water-soluble components from AGN [11]. Several major components of AGN, such as D and DA, are reported as poorly water-soluble, thus improving the aqueous solubility of those active components is necessary. Diverse approaches have been applied to enhance aqueous solubility and dissolution of poorly water-soluble components, including chemical modification, physical modification, and carrier systems [12][13][14]. Among them, the hot-melting extrusion (HME) technique has been used to prepare solid formulations (i.e. solid dispersion) of poorly water-soluble components [15]. 
It offers several advantages over the other methods: HME is fast, continuous, environmentally friendly, and economical [16][17][18]. Using the HME technique, the active compounds of natural products can be expected to disperse at a molecular level, thereby improving their aqueous solubility and/or dissolution rate and prolonging their storage life. Recently, the particle size of orally administered AGN powders has been reported to influence the treatment of estrogen-related symptoms of menopause [11]. Compared to coarse AGN powder, ultrafine AGN powder exhibited superior pharmacological efficacies in terms of serum ovarian and reproductive hormone levels and experimental osteoporosis parameters. Increasing the powder's surface area to volume ratio by ultrafine milling seems to increase the dissolution of active components and subsequently enhance the bioactivity of herbal medicines. This milling method can also be used for extracts, which are likely to lose their bioactivity during processing, storage, and oral intake [11]. Herein, two methods including physical modification (milling and HME) and chemical modification (the addition of Soluplus) were introduced to prepare oral solid formulations of AGN. Ultrafine milled AGN particles with Soluplus were processed by HME to increase the aqueous solubility of pharmacologically active components (D and DA) of AGN. Soluplus, a grafted copolymer composed of polyethylene glycol (PEG) 6000, vinylcaprolactam, and vinyl acetate has been reported as a suitable polymer for HME processing [19,20]. It is known to enhance the aqueous solubility, dissolution, and absorption of poorly water-soluble drug [21][22]. In this study we performed physicochemical characterization and extraction of developed AGN-based solid formulations and assessed their pharmacokinetic properties and pharmacological efficacy in vivo. Co. (St. Louis, MO, USA). All solvents used in this study were high performance liquid chromatography (HPLC) grade. All other chemicals were of analytical grade and used without further purification. Preparation and characterization of AGN formulations AGN was dried in the oven at 55°C for 24 h and cooled at room temperature. The AGN sample was then stored at 4°C until milling. Coarse and ultrafine powder formulations were acquired by the reported milling methods with slight modifications [11]. AGN samples were milled into coarse powder by a pin crusher (JIC-P10-2; Myungsung Machine, Seoul, Korea) equipped with a 30-mesh sieve. The milled powder was fractionated using a sieve shaker (CG-213, Ro-Top, Chunggye Industrial Mfg. Co., Seoul, Korea) equipped with a series of sieves (F 20 cm). The powder was passed through 300-μm mesh size sieves, and unpassed particles were grinded again with the pin crusher. Those powders were then stored at 25°C before ultrafine milling. The coarse powders were pulverized and classified by a low temperature turbo mill (HKP-05; Korea Energy Technology Co., Ltd., Seoul, Korea). The powders were pulverized after passing an impeller with high rotation speed, and the first, second, and third stator classifier system was used to classify particles using centrifugal and drag forces. For ultrafine powder processing, the rotor speed was set as 10,500 rpm, and the temperature of mill chamber was kept at -18°C. The ultrafine AGN powder obtained was stored in a desiccator before its use. 
Particle sizes of coarse and ultrafine AGN (F1 and F2) were measured by a particle size analyzer (Mastersizer 2000; Malvern Instruments Ltd., Worcestershire, UK), by using the laser diffraction technique. Particle size was measured at 25°C with a scattering angle of 90°. The average particle size indicates the mean value of 9 measurements for every sample. AGN-based oral formulations were prepared in this investigation according to the experimental conditions and composition ratios shown in Table 1. For solid formulations, ultrafine AGN powder and water (4:1, w/w) was extruded by different shears using an STS-25HS twinscrew extruder (Hankook E.M. Ltd., Pyoung-Taek, Korea) equipped with a round-shaped die (1 mm in diameter), according to the presented temperatures (Table 2). Three different (low, middle, and high) shear stresses were generated by the different screw arrangements. Each shear stress (low, middle, and high) was represented by measuring specific mechanical energy (SME) [23]. The feeding amount of sample was 28 g. Water (20% of total input weight) was added at 1.0 mL/min speed to the extruder. The speed of screw was 150 rpm and the diameter of die was 1.0 mm, respectively. In Soluplus-included formulations, ultrafine AGN powder was mixed with the determined ratios of Soluplus and then extruded with high shear stress. Prepared AGN-based formulations were dried in the oven at 40°C. Water absorption studies Water absorption-related indexes were determined in triplicate, according to the reported method with slight modifications [24]. One gram of sample was suspended in 30 mL of DW at room temperature, gently stirred for 1 h, and then centrifuged at 3,000 rpm for 20 min. The supernatant was decanted into an evaporating dish of known weight. Related parameters, such as water absorption index (WAI), water solubility (WS), and swelling power (SP) were calculated by the following formulas: Particle size analysis of aqueous dispersion of formulations Each developed AGN formulation (0.3 g) was suspended in 30 mL of DW, and the supernatant was separated by centrifugation at 3,000 rpm for 20 min. The particle-related properties of the supernatant (particle size and polydispersity index) were studied using a light-scattering spectrophotometer (ELS-Z1000; Otsuka Electronics, Tokyo, Japan). Determination of major components in AGN EtOH extract The content of D, DA, and DOH in the AGN EtOH extract (ext) was determined by liquid chromatography-tandem mass (LC-MS/MS) system. An aliquot (5 μL) of AGN EtOH-ext dissolved in methanol (1 μg/mL) was injected onto LC-MS/MS system equipped with an Agilent Technologies 1260 Infinity HPLC system (Agilent Technologies, Wilmington, DE, USA) and Agilent Technologies 6430 Triple Quad LC/MS system. The chromatographic separation of D and DA was achieved by using XTerra MS C18 3.5 μm column (150 × 2.1 mm; Waters, MA, USA) with a C18 guard column (4 × 2.0 mm; Phenomenex, CA, USA). The mobile phase consisted of acetonitrile (A) and 5 mM ammonium formate buffer (B) at a flow rate of 0.2 mL/min. The gradient elution program used was as follows: (1) The ratio of mobile phase A was set at 10% and maintained for 1 min, (2) a linear gradient method was run until it became 70% in 45 min, (3) a linear gradient was run back to make it 10% in 0.1 min and maintained until the pump pressure returned to the initial value. The total run time was 55 min. The fragmentation transitions of D and DA were identical to each other: m/z 329.2 to 229.1. 
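The formulas announced in the water-absorption section above (for WAI, WS, and SP) did not survive text extraction. The conventional definitions below, based on the sediment and supernatant masses from the centrifugation step, are assumed here and may differ in detail from the cited method:

\mathrm{WAI} = \frac{m_{\mathrm{sediment}}}{m_{\mathrm{dry\ sample}}}, \qquad \mathrm{WS}\,(\%) = \frac{m_{\mathrm{dried\ supernatant\ solids}}}{m_{\mathrm{dry\ sample}}}\times 100, \qquad \mathrm{SP} = \frac{m_{\mathrm{sediment}}}{m_{\mathrm{dry\ sample}}\,(1-\mathrm{WS}/100)}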
The Extraction test Each sample of AGN formulation (0.3 g) was added to 30 mL of DW, pH 1.2 buffer, and pH 6.8 buffer, and incubated in the shaker at 40°C for 2 h. Next, the mixtures were filtered to separate the supernatant and sediment, and the supernatants were dried. The amounts of D and DA in those samples were quantitatively analyzed by a high performance liquid chromatography ( In vivo pharmacokinetic study The in vivo pharmacokinetic properties of DOH after oral administration of AGN-based formulations were studied in male Sprague-Dawley (SD) rats (250 ± 5 g of body weight; Orient Bio, Sungnam, Korea). The rats were reared in a light-controlled room at 22 ± 2°C and at 55 ± 5% relative humidity. The experimental protocols of animal studies were approved by the Animal Care and Use Committee of the College of Pharmacy (Seoul National University, Seoul, Korea). The left femoral artery was cannulated with an Intramedic polyethylene tube (PE-50; Becton Dickinson Diagnostics, MD, USA) under 50 mg/kg of Zoletil (intramuscular injection; Virbac, Carros, France) anesthesia. Each formulation was suspended in DW and administered orally at doses corresponding to 100 mg/kg EtOH ext of AGN. Corresponding dose of each formulation was determined based on the sum of D and DA contents in AGN EtOH ext, analyzed by described LC-MS/MS method. Blood samples (200 μL) were collected from the femoral artery at determined times (5, 15, 30, 60, 90, 120, 240, and 480 min), and the equivalent volume of normal saline (containing 20 U/mL heparin) was supplemented at each sampling time. Blood samples were centrifuged at 16,000 × g at 4°C for 3 min, and aliquots (70 μL) of supernatant were stored at -70°C before analysis. DOH concentration in rat plasma was determined using a LC-MS/MS system as described in previous section with slight modification. Losartan (5 μL, LST, internal standard) solution (10 μg/mL) and 95 μL of acetonitrile were added to a 50 μL aliquot of plasma sample and mixed for 5 min. After centrifugation at 16,000 × g for 5 min, the aliquot (5 μL) of supernatant was injected into the LC-MS/MS system equipped with an Agilent Technologies 1260 Infinity HPLC (Agilent Technologies) and Agilent Technologies 6430 Triple Quad LC/MS system. The extraction recovery of DOH from plasma samples was 96.50 ± 2.58%. The LC and MS conditions for DOH analysis were same with described method. The fragmentation transition for LST was m/z 423.4 to 207.3. The fragmentor voltage and collision energy were 115 V and 20 eV for LST. The retention time of LST was 0.47 min. Acquisition and processing were performed with MassHunter Workstation Software Quantitative Analysis (Version B.05.00; Agilent Technologies). Linearity was established in the range of 2-10,000 ng/mL DOH concentration. Lower limit of quantitation (LLOQ) was 2 ng/mL. Precision was within 5.9%, and accuracy was ranged from -7.0 to 10.5% in this analysis method, respectively. The following pharmacokinetic parameters of DOH were calculated by WinNonlin (Version 3.1; Pharsight, Mountain View, CA, USA): total area under the plasma DOH concentration-time curve from time zero to time infinity (AUC), maximum concentration (C max ), and the time of maximum concentration observed (T max ). Histological study The toxicity of developed formulations on the intestinal epithelium was evaluated by histological assay. 
EtOH ext of AGN and AGN-loaded formulations (F2, F5, and F8) were orally administered to SD rats at a dose of 100 mg/kg (AGN EtOH ext), and rats were euthanized 24 h post-administration. The jejunum was dissected and fixed in 10% (v/v) formaldehyde solution for 1 day. Tissues were rinsed with tap water, dehydrated with alcohols, and embedded in paraffin. Tissues were then cut into 5-10 μm thick sections and stained with hematoxylin and eosin (H&E) reagent. Microscopic images were taken to assess the mucosal toxicity of the developed formulations. In vivo memory and learning tests Imprinting control region (ICR) mice (male, 4-week-old, 25-30 g of body weight) were purchased from Daehan Biolink Co., Ltd. (Chungbuk, Korea) and reared at 20 ± 3°C under a 12/ 12-h light-dark cycle with access to food (commercial pellet) and water ad libitum. All animal experimental procedures were approved by the Institutional Animal Care and Use Committee (IACUC) of Kangwon National University (KIACUC). AGN EtOH ext, F8, and donepezil (1 mg/kg) were dissolved in 0.5% CMC-Na solution, and scopolamine was dissolved in normal saline (0.9% NaCl). In case of AGN EtOH ext and F8-treated groups, oral doses were corresponded to 200 mg/kg of AGN EtOH ext. As described, each dose was based on the contents of the sum of D and DA. Both control and scopolamine alone treated groups received 0.5% CMC-Na solution orally. The positive control group was administered donepezil orally. Following 90 min of AGN EtOH ext and F8 administrations, scopolamine was subcutaneously administered in four groups, excluding the control group. Trials of the Morris water maze test and passive avoidance test were performed 30 min after scopolamine administration. The Morris water maze test was performed based on previous reports with slight modifications [25,26]. The water maze consisted of large circular pool (90 cm in diameter and 40 cm in height) filled with opaque water (20 ± 1°C) to a depth of 30 cm by using white milk. The water maze was divided into four equal quadrants. A white escape platform was submerged in one of the pool quadrants, and its location was fixed across 4 days. The same quadrants were not used for the starting point during the test trials. All swimming activity was monitored and recorded using a video camera linked to a smart video-tracking system. The time to reach the platform was recorded as the escape latency. If mice did not find the platform within 120 s, they were guided to the platform, kept there for 10 s, and the escape latency was recorded as 120 s. On the last day, a probe trail test was done by removing the platform for 60 s. The swimming time in the quadrant where the platform was placed was recorded to evaluate the memory function. The passive avoidance test was performed according to previously reported methods [25,26]. A passive avoidance apparatus (Gemini system, San Francisco, CA, USA) has two compartments (17 cm × 12 cm × 10 cm) with an electrifiable grid floor separated by a guillotine door. The drug administration protocol for this test was identical to that used for the Morris water maze test. The training trial was performed on the first day. When the mice completely moved from the light to dark compartment after guillotine door opening, the door was closed automatically, and a 2 s electric foot-shock (0.1 mA/10 g body weight) was delivered through the grid floor. 
The test trial began 24 h after the training trial, and the elapsed time before mice entered the dark compartment (latency time) was measured. If a mouse waited more than 180 s in the light compartment, then it was excluded from this experiment. Statistical analysis All experiments in this investigation were performed at least three times, and the data were presented as the mean ± standard deviation (SD). Statistical analyses were done by two-tailed Student's t-test or analysis of variance (ANOVA). p values less than 0.05 indicate a statistically significant difference. Results and Discussion Preparation and characterization of AGN-based formulations AGN-based oral formulations were prepared using the HME technique (Fig 1). Prior to the HME process, AGN was pulverized into ultrafine particles. Different shear stress and weight ratios of Soluplus were tested to optimize the HME process (Table 1). Soluplus was included as a polymer matrix for oral solid formulations of AGN, and the temperatures in the barrel section were set as presented in Table 2 during the HME process. Since the glass transition temperature (T g ) of Soluplus is around 70°C and the heating zone of barrel can be set 15-60°C above the T g of the polymer [16], established temperatures in the barrel section seem to be appropriate. The degree of shear stress can be explained by SME. SME can be defined as the consumption of mechanical energy during extrusion process per mass flow rate [23]. It is dependent on the conditions of extrusion process such as screw speed, barrel temperature, water content, feed composition, and screw configuration. Different screw configuration used in this study can produce different degree SME values. As shown in Table 3, SME value was increased in the order of low < middle < high shear stress. To put AGN into the HME machine, the particle size of AGN powder was reduced. After coarse milling, the mean diameter of particles (F1) was 329.3 ± 2.4 μm (Table 4). To further reduce particle size, coarse powders (F1) were milled into ultrafine particles (F2), which have a mean diameter of 47.9 ± 2.5 μm (Table 4). Particle size reduction, from coarse to ultrafine, by milling processes is shown in the particle size distribution chart (Fig 2). The narrow size distribution of coarse and ultrafine particles (F1 and F2) is also presented. The particle size ranges of milled AGN powders have already been reported [11]. According to the SEM images of F1 and F2, an irregular shape was observed (S1 Fig). With ultrafine AGN particles (F2) processed by HME, oral AGN-based formulations were prepared according to the composition and processing conditions detailed in Tables 1 and 2. Due to the high surface area to volume ratio of F2 compared to F1, the HME process with ultrafine particles (F2) is expected to produce a more homogeneous dispersion of poorly water-soluble matrix components. Pulverized AGN powders, not in the form of active components of AGN, were put into the HME machine with or without Soluplus. Cytoskeleton components of the AGN root (i.e. celluloses) seem to participate as principal matrices for solid formulations. Throughout the heated barrel, AGN, with or without Soluplus, was processed into homogeneous extrudates by heating and shear stress (dependent on the screw configuration). Melted extrudates were successfully prepared by the described processing conditions and pulverized into powder form for further studies. 
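Because shear stress is reported above via specific mechanical energy (SME), a brief note on its usual calculation may help. SME is typically computed from the net screw torque, screw speed, and feed rate; the exact expression used by the authors is not given, so the conventional form below is an assumption consistent with the definition stated above (mechanical energy consumed per mass flow rate):

\mathrm{SME} = \frac{P_{\mathrm{mech}}}{\dot{m}} = \frac{2\pi\, N\, \tau}{\dot{m}}

where N is the screw speed (revolutions per unit time), τ is the net torque applied to the screws, and ṁ is the feed (mass flow) rate.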
Water absorption studies Water absorption-related activities of developed AGN-based formulations were evaluated using three parameters: WAI, WS, and SP. As shown in Table 5, WS values of HME-processed formulations (F3-F8) were higher than the milled particles of AGN (F1 and F2). Notably, the higher weight ratio of Soluplus induced an increase in WS values (F5-F8). HME processing with high shear stress and the addition of Soluplus can make more channels responsible for the permeation and penetration of water into the core of matrix. Using SEM images, we observed the formation of multi-pores on the surface of Soluplus-included formulations (F8), which were not observed in the absence of Soluplus in F5 (S2 Fig). In contrast, WAI and SP values were lower in the HME-processed and higher in the Soluplus-included formulations (F3-F8) than the milled particles of AGN (F1 and F2). Reduced WAI values indicate that a greater soluble component of AGN can be obtained by processing AGN with higher shear stress and the addition of Soluplus. Increased WS values in AGN and Soluplus-based groups with HME-processing suggest that poorly water-soluble components were efficiently extracted in the aqueous environment. Particle size analysis of the aqueous dispersion of formulations After centrifugation, the supernatant of the aqueous dispersion of developed formulations was obtained, and its particle size and size distribution were measured (Fig 3 and Table 5). Although nano-sized dispersion was observed in all groups, the application of higher shear stress and the addition of Soluplus produced smaller particles (<200 nm). Moreover, the size distribution of that group (F8) was narrower and more uniform (unimodal), than the other groups (Fig 3). Nanosized (<200 nm) and narrowly distributed particles can contribute to the enhanced intestinal permeation of the active components of AGN. Homogeneous dispersion of the active components of AGN in the matrix, achieved by high shear stress and the addition of Soluplus, may be related to the separation of nano-sized particles from the developed formulations. It is assumed that these nanoparticles are derived from components of AGN and Soluplus after HME processing. Extraction test The amount of D and DA, the active components of AGN, extracted from AGN in water was quantitatively analyzed after 2-h incubation in different media (Fig 4). The solubility of D and DA in AGN EtOH ext, AGN particles (F1 and F2), and HME-processed formulations (F3-F8) was measured in DW, pH 1.2 buffer, and pH 6.8 buffer. The buffers at pH 1.2 and 6.8 simulate gastric fluid and intestinal fluid, respectively. Thus, they can be used to predict the amount of D and DA released in the gastrointestinal tract after oral administration. The contents of major compounds in the prepared AGN EtOH ext are as follows; 61.00 ± 12.63 mg/g (D); 49.30 ± 12.13 mg/g (DA); 2.19 ± 0.039 mg/g (DOH). The influence of processing temperature of HME on the major components (D and DA) of AGN was also investigated (S3 Fig). Even after 6 h incubation (longer than HME processing period) of AGN EtOH ext at 100°C, the contents (%) of D and DA were maintained around 90%. It implies that the major components of AGN can be preserved in the HME processing condition. As shown in Fig 4, the solubilities of D and DA in the F7 and F8 groups were significantly higher than those in the AGN EtOH ext group (p < 0.05) in all kinds of media (DW, pH 1.2 and 6.8 buffers). 
The solubilities of D and DA in the AGN EtOH ext in DW were 1.68 ± 0.23 and 1.50 ± 0.20 mg/g (the amount of D or DA per the weight of each sample), respectively. The extraction efficiencies, which can be defined as the ratio (%) between the solubility in the medium and the content included in the EtOH ext, of D and DA in AGN EtOH ext were less than 5% in DW and buffers (pH 1.2 and 6.8). Particularly, the solubilities of D and DA in F8 (AGN: Soluplus = 50:50) in DW were 9.91-and 9.14-fold higher than those in the AGN EtOH ext group, respectively. In both pH 1.2 and 6.8 buffers, the solubilites of D and DA in the F8 group were significantly higher than those in the AGN EtOH ext group. In the F8 group, the values at pH 1.2 were 20.12-and 19.08-fold higher than those in the AGN EtOH ext group. In addition, at pH 6.8, 12.23-and 11.23-fold increase in the solubilities of D and DA in F8 was observed compared to the AGN EtOH ext group. HME processing with high shear stress and the addition of a higher percentage of Soluplus seem to improve the dissolution of the poorly water-soluble components D and DA in this study. This result can be explained by the increased WS value (Table 5). Specifically, Soluplus induced the formation of pores on the surface of the AGN-based matrix, and it seemed to be related to the highly efficient extraction of the active components of AGN in the aqueous milieu (S2 Fig). Its solubility-and dissolutionenhancing effects of poorly water-soluble components were already reported [27,28]. Although the crystalline/amorphous states of the active components of AGN were not demonstrated in this study, the molecular dispersion of the active components in the matrix seemed to occur by melting (induced by the temperature of the barrel) and high shear (generated by screw configuration). The enhanced solubility of D and DA in the F8 group observed in the simulated gastrointestinal fluids could lead to the improvement of their intestinal absorption. In vivo pharmacokinetic study The pharmacokinetics of DOH was investigated in rats after oral administration of AGN EtOH ext, F2, F5, or F8 (Fig 5 and Table 6). Previous reports have found that orally administered D and DA are metabolized to DOH in the liver [8,29]. Therefore, we evaluated the oral absorption of D and DA by measuring DOH concentration in plasma. DOH content in AGN has been reported to be much lower than both D and DA [2], and data quantitatively analyzed by LC-MS/MS from our preliminary study with AGN EtOH ext confirms this finding. Thus, after oral administration of AGN-based formulations, DOH concentration in plasma can be linked to the extent of D and DA intestinal absorption. It is known that DOH administration has a protective effect against memory impairment in mice [30]. Doses for oral administration were determined based on the equivalent amounts of D and DA in each formulation, as measured by LC-MS/MS. AGN EtOH ext, F2 (ultrafine particle of AGN), F5 (HME product with high shear), and F8 (Soluplus-included HME product with high shear) were used to delineate the influence of ultrafine milling, HME processing, and the addition of Soluplus on the oral absorption of D and DA. As shown in Table 6, the relative fraction absorbed (F rel ) followed this order: AGN EtOH ext < F2 < F5 < F8. The relative oral bioavailability of DOH in the F8 group was 8.75-fold higher than that of the AGN EtOH ext group. 
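The relative fraction absorbed (F_rel) used in this comparison is, by the usual convention (assumed here, since the formula is not given in this excerpt), the AUC ratio against the reference extract with dose correction:

F_{\mathrm{rel}}\,(\%) = \frac{\mathrm{AUC}_{\mathrm{formulation}}}{\mathrm{AUC}_{\mathrm{AGN\ EtOH\ ext}}}\times\frac{\mathrm{Dose}_{\mathrm{AGN\ EtOH\ ext}}}{\mathrm{Dose}_{\mathrm{formulation}}}\times 100

Because the doses were matched on the combined D and DA content, the dose-correction term equals 1, so the 8.75-fold higher AUC for F8 translates directly into an 8.75-fold higher relative oral bioavailability.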
By comparing AUC values between 4 groups, the influence of ultrafine milling, HME processing, and the addition of Soluplus was obvious. C max values of formulations (F2, F5, and F8) were also higher than that of the AGN EtOH ext group. The improved oral bioavailability of F8 can be explained by the enhanced extraction of D and DA in gastrointestinal fluids (Fig 4). As reported [31], the mucosal absorption enhancing effect of Soluplus for poorly water-soluble drugs may also contribute to the improved oral bioavailability of AGN components. Histological assay The acute toxicity of developed formulations on the intestinal epithelium was evaluated by histological assay (Fig 6). The influence of AGN EtOH ext, F2, F5, and F8 on the intestinal epithelium was assessed by the H&E method. As shown in Fig 6, the application of prepared solid formulations did not induce toxicity on mucosal structures, including microvilli and cell junctions. Pathological changes including inflammation and erosion were also not detected in these groups (F2, F5, and F8). Soluplus, included in the F6-F8 formulations, is a grafted copolymer consisting of polyethylene glycol (PEG) 6000, vinylcaprolactam, and vinyl acetate. Its oral LD 50 value was higher than 5 g/kg, according to the manufacturer's data (BASF SE, Ludwigshafen, Germany). Considering the dose of AGN formulations in this study, the amount of Soluplus administered is less than the LD 50 value. Thus, Soluplus, at the given weight ratio, can be used to make oral solid formulations of AGN by HME processing without severe toxicity. These findings suggest that these orally administered formulations can be used safely. In vivo pharmacological efficacy test F8 exhibited the highest systemic exposure to DOH after oral administration in our pharmacokinetic study in rats ( Fig 5); thus it was selected for further in vivo memory-impairment tests along with the EtOH ext of AGN (Figs 7 and 8). The memory enhancing effect of AGN EtOH ext and F8 on scopolamine-induced memory impairment was evaluated by the Morris water maze test and passive avoidance test in mice. Scopolamine is a muscarinic cholinergic receptor antagonist and is known to induce memory dysfunction in behavioral tests. The Morris water maze test was performed to evaluate hippocampus-dependent spatial memory and learning. In the Morris water maze test, spatial learning and memory were assessed in mice by monitoring the latency to locate the platform. The effect of AGN EtOH ext and F8 on the escape latency was evaluated by the Morris water maze test (Fig 7). The control group exhibited shorter escape latencies during the test trials than the scopolamine-treated group. The scopolamine-treated group maintained a constant latency to escape across 4 days. Donepezil, used as a positive control, had a lower escape latency value for those 4 days than the scopolamine-treated group. Notably, F8 had a comparable escape latency value to that of the donepezil-treated group and a significant reduced escape latency value compared to the scopolamine-treated group (p < 0.05). Learning and memory in mice were also assessed by a probe trial test on the last day of the examination ( Fig 7B). Mice receiving scopolamine treatment displayed a shorter swimming time in the target quadrant than the control group. After scopolamine treatment, the F8-administered group spent more time in the target quadrant (p < 0.05). 
Moreover, in scopolamine-treated mice, the latency period of the F8-treated group was comparable to that of the donepezil-treated group. These findings suggest that the oral administration of F8 could improve long-term memory. We performed the passive avoidance test to compare the memory enhancing effect of AGN EtOH ext and F8 (Fig 8). The passive avoidance test was used to evaluate memory based on fear motivation. Latency time was defined as the period required to move from the light to the dark compartment after exposure to foot shock. In the training trial, there was no significant difference in latency time between all groups. This result indicates that drug administration did not influence the training trial. In the test trial, the scopolamine-treated group exhibited a decreased latency to escape compared to the control group (p < 0.05). Both the AGN EtOH ext and F8-treated groups displayed an increased latency time in the test trial compared to the scopolamine-treated group. Particularly, the F8-administered group presented a significantly increased latency time compared to scopolamine-and AGN EtOH extadministered groups (p < 0.05). Considering all these findings, F8 has a more potent memory-enhancing effect than AGN EtOH ext. This result may be due to a greater increase in the AUC value for the F8-treated group than the AGN EtOH ext group (Fig 5 and Table 6). Therefore, this AGN and Soluplusbased formulation processed by HME may be useful as an efficient oral formulation for the treatment of memory impairment. Oral solid formulations based on AGN and Soluplus were prepared by HME processing. Before HME, AGN was pulverized into particles and their physicochemical properties were investigated. Under HME processing with high shear with the addition of Soluplus, AGN-based extrudates were produced. The extracted amounts of AGN active components (D and DA) from F8 formulation, were increased in simulated gastrointestinal fluids compared to the AGN EtOH ext group. The relative oral bioavailability of DOH in F8-administered rats was 8.75-fold higher than that of the AGN EtOH ext group. In scopolamine-induced memory impaired mice, the cognitive enhancing effect of AGN/Soluplus-based formulation (F8) was significantly greater than the AGN EtOH ext group in the Morris water maze test and passive avoidance test. AGN and Soluplus-based solid formulations developed by HME technique can accomplish the improvement of the aqueous solubility, dissolution, and intestinal absorption of poorly water-soluble components of AGN and subsequent enhancement of memory. HME equipped with twin screw for generating high shear can alter the physicochemical properties of AGN components. Enhanced oral absorption and memory ameliorating effect can contribute to reduce daily intake amounts of AGN. Moreover, versatile processability of extrudates into various dosage forms, such as powder, granule, tablet, and capsule, can increase the usefulness as a dietary supplement. Soluplus-included solid formulation prepared by HME can be a promising carrier for oral delivery of phytochemicals. The influence of AGN EtOH ext and F8 on the scopolamine-induced memory impairment mice, evaluated by passive avoidance test. # p < 0.05, compared to control group; *p < 0.05, compared to scopolamine-treated group; + p < 0.05, compared to (scopolamine + AGN EtOH ext)-treated group. Two-tailed t-test is used for the statistical analysis. Each point represents mean ± SD (n = 6).
TüKaPo at SemEval-2020 Task 6: Def(n)tly Not BERT: Definition Extraction Using pre-BERT Methods in a post-BERT World We describe our system (TüKaPo) submitted for Task 6: DeftEval, at SemEval 2020. We developed a hybrid approach that combined existing CNN and RNN methods and investigated the impact of purely-syntactic and semantic features on the task of definition extraction. Our final model achieved a F1-score of 0.6851 in subtask 1, i.e, sentence classification. Introduction By all accounts, the first reliable English dictionary was written by Samuel Johnson 1 and published on 15 April, 1755. It was 18 inches tall, 20 inches wide when opened, and contained 42,773 entries. It took Dr. Johnson 7 years to complete, and yet it was missing the word contrafibularity, amongst others. 2 In the 265 years that have passed since, the world has evolved at a blisteringly fast pace with technology integrating itself ever more closely with our lives. However, the compilation and maintenance of dictionaries and lexicons -one of the most important and authoritative sources of meaning -continues to be the exclusive field of domain experts and lexicographers. Nevertheless, with the recent advances in natural language processing, this area -like many others that deal with human language -has seen a growing interest in automating the development of such resources. Definition extraction is defined as the automatic identification of definitional knowledge in text, modeled as a binary classification problem between definitional and non-definitional text. In the early days of definition extraction, rule-based approaches leveraging linguistic features showed promise. Westerhout (2009) used a combination of linguistic information (n-grams, syntactic features) and structural information (position in sentence, layout) to extract definitions from Dutch texts. Such approaches, however, were found to be dependent on language and domain, and scaled poorly. Later research incorporated machine learning methods to encode lexical and syntactic features as word vectors (Del Gaudio et al., 2014). Noraset et al. (2017) tackled the problem as a language modelling task over learned definition embeddings. Espinosa-Anke et al. (2015) derive feature vectors from entity-linking sources and sense-disambiguated word embeddings. More recently, Anke and Schockaert (2018) use convolutional and recurrent neural networks over syntactic dependencies to achieve very good results on the WCL and W00 datasets (Navigli and Velardi, 2010;Jin et al., 2013). This paper describes our approach of combining existing methods over state-of-the-art techniques that involve the use of contextualized word embeddings such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) in an attempt to determine if the former still offer avenues of optimization that can help them perform competitively with the latter. Task Background The DeftEval shared task (Spala et al., 2020) is based around the English-language DEFT (Definition Extraction From Texts) corpus (Spala et al., 2019). It consists of annotated text extracted from the following semi-structured and free-text sources. Compared to similar existing definition extraction corpora such as WCL (Navigli and Velardi, 2010) and W00 (Jin et al., 2013), the data offered by the DEFT corpus is larger in size (23,746 sentences; 11,004 positive annotations) while also providing finer-grained feature annotations. 
The shared task consists of three subtasks: 1) Sentence Classification (classify whether a sentence contains a definition or not), 2) Sequence Labeling (label each token with BIO tags according to the corpus specification), and 3) Relation Classification (label the relations between each tag according to the corpus specification). We participated in the first subtask. The test data for the first subtask is presented as (SENTENCE, BIN_TAG) pairs, where BIN_TAG is 1 if SENTENCE contains a definition and 0 otherwise. During training for the first subtask, the training and development datasets were converted into the same format as the test dataset using a script provided with the corpus. A positive label was associated with every sentence that contained tokens with B-Definition or I-Definition tags; all other sentences were associated with a negative label. Baseline We developed and iterated on both LSTM-based (Hochreiter and Schmidhuber, 1997) recurrent and convolutional (O'Shea and Nash, 2015) neural network models. Our RNN architecture is a network of a single bidirectional LSTM layer followed by two feed-forward layers and a final sigmoid-activated read-out layer. This architecture is implemented by model BL-RNN (Baseline RNN), whose input layer accepts sequences of feature vectors. Our hybrid-CNN architecture is implemented by model BL-CNN (Baseline CNN), which is based on the work by Anke and Schockaert (2018). It accepts feature vector sequences that are passed through a one-dimensional convolutional filter and a max-pooling layer, followed by a single BiLSTM and read-out layers. The intuition behind combining convolutional and recurrent layers is to leverage the implicit local feature extraction performed by the convolutional layers to refine the final representation passed to the recurrent layer, which accounts for global features. The input sequences are composed as concatenations of vectors of individual features at the token level, resulting in a homogeneous representation, e.g. each token is encoded as the concatenation of an n-dimensional word vector, an m-dimensional one-hot encoded POS tag vector, etc. We conducted several experiments with the above two architectures and iterated on successful models. The provided corpus was pre-split into train and dev splits. A 90-10 split was performed on the train split to generate the validation set; the dev split was used as the test data as-is. All models were trained for 100 epochs with an early-stopping mechanism that monitored the validation loss over the last 10 epochs. Batch size was set to 128, and Adam (Kingma and Ba, 2014) was used as the optimizer with binary cross-entropy loss. URLs were stripped from token sequences as a preprocessing step. Sentences were parsed with spaCy using the en_core_web_lg model to obtain POS tag sequences and dependency relation data. The results of our experiments are listed in Table 1. The reported figures were averaged over three iterations of each experiment. Influence of Syntactic & Semantic Information Our initial experiments were premised on the hypothesis that neural definition extraction can be primarily modelled on morphosyntactic features while excluding or restricting the use of semantic and lexical information. By limiting the influence of semantics, we expected to train a model that generalized well over multiple domains by virtue of being less susceptible to lexical cues that could potentially act as distractors.
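To make the baseline concrete, the following is a minimal sketch of a hybrid CNN-BiLSTM sentence classifier in the spirit of BL-CNN, written with TensorFlow/Keras. The layer sizes, filter width, and per-token feature dimension are illustrative assumptions, not the values used in the experiments above.

```python
# Minimal sketch of a BL-CNN-style hybrid classifier (illustrative hyperparameters).
import numpy as np
from tensorflow.keras import layers, Model, callbacks

MAX_LEN = 80    # assumed maximum sentence length (tokens)
FEAT_DIM = 350  # assumed per-token feature size, e.g. 300-d word vector + one-hot POS tag

# Input: one feature vector per token (word embedding concatenated with POS one-hot, etc.)
inp = layers.Input(shape=(MAX_LEN, FEAT_DIM))

# 1D convolution + max-pooling extract local, n-gram-like features ...
x = layers.Conv1D(filters=128, kernel_size=3, activation="relu", padding="same")(inp)
x = layers.MaxPooling1D(pool_size=2)(x)

# ... which are then summarised globally by a bidirectional LSTM.
x = layers.Bidirectional(layers.LSTM(64))(x)

# Read-out: binary decision "contains a definition or not".
out = layers.Dense(1, activation="sigmoid")(x)

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Early stopping on validation loss, mirroring the training setup described above.
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True)

if __name__ == "__main__":
    # Random stand-in data; in practice X holds per-token feature vectors and y the binary labels.
    X = np.random.rand(256, MAX_LEN, FEAT_DIM).astype("float32")
    y = np.random.randint(0, 2, size=(256, 1))
    model.fit(X, y, validation_split=0.1, epochs=5, batch_size=128, callbacks=[early_stop])
```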
To test this, we trained multiple models on a combination of (embedded) word token, POS and dependency relation features. With the exception of one model that trained its own word embeddings, the word embedding matrices of the other models with word token features were initialized with 300-dimensional pre-trained GloVe (Pennington et al., 2014) and word2vec (Mikolov et al., 2013) embeddings respectively. The GloVe and w2v embeddings were trained on the Common Crawl and Google News corpora respectively. Most interestingly, the model that was exclusively trained on syntactic information was the one that performed the worst. Virtually all other models out-performed the syntax-only model even when they were only trained on word tokens. This fundamentally showed our hypothesis to be flawed, which was further corroborated by the minimal effect of network architecture on the results. These findings indicated that syntactic features were the least informative when used by themselves and the most informative when used in concert with semantic information provided by word embeddings. The corollary is that word embeddings - pre-trained or otherwise - are able to approximate rudimentary information about syntax that would otherwise be offered by syntactic features like part-of-speech tags. It also follows that combining both kinds of features in a complementary manner should enable the model to perform better. Feature Modelling Building upon the findings of the previous experiments, we tested the effect of combining punctuation and part-of-speech tags. It was immediately evident that replacing the PUNCT POS tag with the punctuation character occurring at that position had a positive effect on the model's performance. Beyond the implicit increase in information offered by the actual character, it also reaffirms the importance of syntactic features in this task. The addition of dependency relation features, however, had a less immediately obvious impact. The RNN model saw a reduction in performance while the hybrid-CNN model fared better. Upon further investigation, we determined that the input encoding scheme's attempt to homogenize feature vectors across disparate features, viz., combining sequential (token-level) features (token, POS) with non-sequential (sentence-level) features (dependencies), actually hindered the recurrent model from optimally exploiting the former. With this key insight, we were able to rearchitect our model to learn a representation that composes token- and sentence-level features separately but efficiently. Final Architecture Our final architecture is informed by the findings of our previous experiments. It accepts three inputs: At the token level, both tokens and part-of-speech tags (with punctuation) are used. Pre-trained GloVe embeddings are used for tokens, while embeddings for POS tags are learned on the fly. The concatenation of both embeddings is passed through two "feature extraction units" that consist of a BiLSTM (to target sequence/global information) and a 1D-Conv + MaxPool layer (to target local information). At the sentence level, dependency information is encoded as the concatenation of the embeddings of the head word, modifier word and dependency label of each relation. This is connected to two stripped-down "feature extraction units" without the BiLSTM layer, since dependency relations are sequentially independent.
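To illustrate the separation of token-level and sentence-level feature extraction, here is a rough Keras sketch of such a multi-input architecture, including the final concatenation and read-out step described in the next paragraph. Vocabulary sizes, embedding dimensions and unit counts are invented for the example, and the dependency triples are fed through a dense stand-in rather than true head/modifier/label embeddings; none of this is the authors' exact configuration.

```python
# Illustrative multi-input model: token-level (tokens + POS) and sentence-level (dependency triples).
from tensorflow.keras import layers, Model

MAX_TOKENS, MAX_DEPS = 80, 80                  # assumed sequence lengths
VOCAB, POS_TAGS = 20000, 50                    # assumed vocabulary sizes

def conv_block(x):
    """Local feature extraction: 1D convolution followed by max-pooling."""
    x = layers.Conv1D(64, 3, activation="relu", padding="same")(x)
    return layers.MaxPooling1D(2)(x)

# --- Token-level branch: word tokens + POS tags (with punctuation characters kept) ---
tok_in = layers.Input(shape=(MAX_TOKENS,), dtype="int32", name="tokens")
pos_in = layers.Input(shape=(MAX_TOKENS,), dtype="int32", name="pos_tags")
tok_emb = layers.Embedding(VOCAB, 300)(tok_in)    # would be initialized with GloVe in practice
pos_emb = layers.Embedding(POS_TAGS, 16)(pos_in)  # learned on the fly
tok = layers.Concatenate()([tok_emb, pos_emb])
tok = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(tok)  # global/sequence information
tok = conv_block(tok)                                                     # local information
tok = layers.GlobalMaxPooling1D()(tok)

# --- Sentence-level branch: (head, modifier, dependency label) triples, no BiLSTM ---
dep_in = layers.Input(shape=(MAX_DEPS, 3), name="dependency_triples")
dep = layers.Dense(64, activation="relu")(dep_in)  # stand-in for concatenated triple embeddings
dep = conv_block(dep)
dep = layers.GlobalMaxPooling1D()(dep)

# --- Merge the two representations at a higher level of the network ---
merged = layers.Concatenate()([tok, dep])
merged = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid")(merged)

model = Model([tok_in, pos_in, dep_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```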
Finally, the extracted representations of the token- and sentence-level features are concatenated and connected to a feed-forward layer and then a read-out layer. The separation of feature extraction at the token and sentential levels allows their information to be combined at a higher level in the network, and we indeed see a marked improvement when this model is trained with dependency information. The model achieved a best F1-score of 0.76 during development. Results & Discussion The final model achieved a positive-class F1-score of 0.6851, ranking 47th out of 56 submissions for the first subtask. While the model under-performed in a substantial departure from our expectations, we identified factors that may have contributed to it. Since the gold-standard data for the test set was not available to us, a direct error analysis on the test data was not possible. We also found several incongruities in the corpus where contradictions in annotations led to an ambiguous ground truth. Consider the following sentences from the training corpus: "Organisms are individual living entities." and "Organelles are small structures that exist within cells." The first sentence was annotated with the positive class (contains a definition) even though the latter was not. Another similar albeit more ambiguous example: "Recall from The Macroeconomic Perspective that if exports exceed imports, the economy is said to have a trade surplus." and "If imports exceed exports, the economy is said to have a trade deficit." Here, the second sentence is tagged as containing a definition even though the first is not. While some of these ambiguities can be attributed to how the training data for the binary classification task is generated from the larger sequence-annotated corpus, there are many other counter-examples where the rationale behind the annotation is unclear. Such incongruities ultimately make it more challenging for the model to attain a clear and optimal generalization. Conclusion We presented our system for definition extraction, whose pre-BERT methods achieved an admittedly pre-historic F1-score of 0.6851 in Task 6: DeftEval, subtask 1. Future work could potentially include the customization of the architecture to incorporate ensemble training, exploring the usage of more task-specific cues such as topical information, and perhaps even becoming one with the BERT side and using contextualized word embeddings. In light of our experiments with the combination of syntactic and semantic features, the ability of such models to implicitly reproduce the classical NLP pipeline (Tenney et al., 2019) makes them a natural fit for the task. However, not all languages and domains have the ample amount of text resources required to (pre-)train large Transformer-based models such as BERT, not to mention the increasing computational costs of training such models. Therefore, one should not lightly dismiss the advantages of linguistically-motivated, task-specific approaches in favour of more general, task-agnostic ones.
2,753.8
2020-01-01T00:00:00.000
[ "Computer Science" ]
Ferroelectric $\pi$-stacks of molecules with the energy gaps in the sunlight range Ferroelectric $\pi$-stacked molecular wires for solar cell applications are theoretically designed in such a way that their energy gaps fall within the visible and infrared range of solar radiation. Band engineering is tailored by modifying the number of aromatic rings and by choosing the number and kind of the dipole groups. The electronic structures of the molecular wires and the chemical character of the electron-hole pair are analyzed within the density functional theory (DFT) framework and the hybrid DFT approach by means of the B3LYP scheme. Moreover, it is found that one of the advantageous properties of these systems reported earlier - namely the separate-path electron and hole transport - still holds for the larger molecules, due to the dipole selection rules for electron-hole generation, which do not allow the lowest optical transitions between states localized at the same part of the molecule. INTRODUCTION The ability to absorb a wide range of the solar spectrum and convert this energy into a voltage between the electrodes is a key factor for an efficient solar battery. Therefore, the optically active materials should ideally be composites of small-, middle- and wide-bandgap semiconductors, in order to cover the whole radiation range from the soft ultraviolet (350 nm) to the far infrared of a sunset (1400 nm). This possibility is offered by multilayers of planar molecules or arrays of molecular wires. Recently, the so-called covalent organic frameworks (COFs) have attracted attention due to their special optical, transport, and catalytic properties, as well as their easy fabrication [1][2][3][4]. From the point of view of photovoltaics, stacks of COFs are similar to bulk or integrated heterojunctions [4,5]. By changing the planar bonds between the molecules from covalent to hydrogen bonds, one can restrict the electronic transport to the direction across the layers, i.e. along the stacks, while the electronic transport within the planes is suppressed. This is advantageous for solar batteries, where the in-plane current would only dissipate energy. Therefore, in previous works [6,7], we studied layers of molecules with COOH terminal groups as the connecting parts for building the networks. The COOH group also possesses a small dipole moment. The ferroelectrically ordered molecules, composed of benzene rings and two dipole groups (COOH and CH$_2$CN), arranged in π-type stacks of layers or in molecular wires, show many appealing effects [6,7]. In particular, the energy levels of the subsequent layers (or molecules in a stack) are aligned in a cascade, and this holds for the valence and conduction bands alike [6][7][8]. Each layer is simultaneously a donor and an acceptor of electrons and holes, depending on the direction of the carrier motion [6]. The excitons in such layers are localized and have a charge-transfer character from the dipole group to the aromatic central ring. The electric field generated by the ferroelectrically ordered dipole groups leads to a polarization which is induced at the electrodes. For graphene sheets chosen as the electrodes, this effect causes a change of the work function by ±1.5 eV for the anode and cathode, respectively [6]. Moreover, the electrons and holes move across such π-stacks along different paths: the electrons through the central rings and the holes between the dipole groups [7].
The carrier mobilities, obtained with the relaxation time estimated from elastic scattering and ionic intrusions, are higher than those in the organometal halide perovskites [7,9,10]. For all the above reasons, it is interesting to investigate further properties of the ferroelectric molecular layers and stacks, in order to bring these systems closer to experimental and industrial interest [11]. In this theoretical work, we focus on the rules which govern the bandgap change. The energy gap should fall into the range of solar radiation that is of interest to us. Recently, a system similar to the cases investigated by us has been theoretically and experimentally studied, namely a 2D imine polymer [12]. The authors revealed that bandgap tuning is possible in this material by expanding the conjugation of the backbone of the aromatic diamines [12]. It is also well known that the bandgap decreases with the size of a system [13,14]. However, without calculations, the exact value of the energy gap is difficult to predict, as is its dependence on the symmetry and the edge termination [14,15]. We began our study by choosing the type of molecules that could be most promising for our purpose. Fig. 1 presents some of the molecules which are analyzed in this work. The collection of the geometries of all other studied systems, not presented in figures here, is included in the supporting information. We have chosen three dipole groups: COOH, CH$_2$CN, and CH$_2$CF$_3$, and their combinations in various repetitions. The size of the mesogenic aromatic part was enlarged linearly. The number of benzene-type rings is given in our notation by an integer following the letter "b". Firstly, we investigated the effect of the number of aromatic rings and the number of dipole groups on the size of the energy gap, i.e., the difference between the energetic positions of the lowest unoccupied and the highest occupied molecular orbitals (LUMO-HOMO). Secondly, we checked the effect of mixing various chemical groups as the terminal dipoles in one molecule. We also estimated the effect of π-stacking. Finally, we analyzed how the molecular modifications - which were made in order to tailor the bandgap size - affect the exciton (electron-hole) character and the separation of the charge-carrier paths. THEORETICAL METHODS The molecular calculations have been performed with the Gaussian code [16], using the correlation-consistent valence double-zeta atomic basis set with polarization functions, cc-pVDZ [17]. Molecular geometries were optimized with the hybrid-functional method in the B3LYP flavor [18], which mixes density functional theory (DFT) [19] in the BLYP [20,21] parametrization with Hartree-Fock exact exchange in proportions of 80% and 20%, respectively. The optimized atomic structures were used to build the molecular wires. Further calculations for the molecules and 1D structures have been performed with the Quantum ESPRESSO suite of codes [22]. This package is based on the plane-wave basis set and pseudopotentials for the core electrons. Norm-conserving pseudopotentials were used with the energy cutoff for the plane waves set to 35 Ry. Moreover, some of the results, such as the optimized intermolecular distances, were checked to be the same with a larger energy cutoff of 45 Ry. The intermolecular distances were obtained within the local density approximation (LDA), since it is known to give better geometries than the generalized gradient approximation (GGA).
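For reference, the B3LYP exchange-correlation energy mentioned above is conventionally written as the following three-parameter hybrid; this is the standard textbook form rather than a formula taken from this paper, with the 20% exact-exchange fraction corresponding to the coefficient $a_0$.

```latex
% Standard three-parameter B3LYP hybrid functional (textbook form)
E_{xc}^{\mathrm{B3LYP}} = E_{xc}^{\mathrm{LDA}}
  + a_0\left(E_{x}^{\mathrm{HF}} - E_{x}^{\mathrm{LDA}}\right)
  + a_x\left(E_{x}^{\mathrm{B88}} - E_{x}^{\mathrm{LDA}}\right)
  + a_c\left(E_{c}^{\mathrm{LYP}} - E_{c}^{\mathrm{LDA}}\right),
\qquad a_0 = 0.20,\; a_x = 0.72,\; a_c = 0.81.
```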
The LDA-optimized separations between molecules in the stack were then used for the B3LYP calculations for the molecular wires. A uniform Monkhorst-Pack k-point mesh in the Brillouin zone [23] of 1×1×10 was chosen for the wires. For the B3LYP scheme, we used meshes of 1×1×9 and 1×1×3 for the k- and q-point grids, respectively. In order to obtain the band structures projected onto local groups of atoms, we employed the wannier90 package [24], which interpolates bands using the maximally-localized Wannier functions [25,26]. The same tool has been used for the calculation of the dipole moment, which can be obtained from the positions of the maximally-localized Wannier centers, $\mathbf{r}_n$, using the formula [27] $\mathbf{d} = \sum_a Z_a \mathbf{R}_a - 2\sum_n \mathbf{r}_n$, where $Z_a$ and $\mathbf{R}_a$ are the atomic pseudopotential charge and its position, correspondingly, the factor of two accounts for the double occupancy of the Wannier functions, and the indexes a and n run over the number of atoms and Wannier functions, respectively. Dipole moment There are a number of important implications of using the dipole groups: i) the hydrogen bonds via the COOH groups within the planes restrict the electronic transport and the dissipation of energy in the directions perpendicular to the photovoltaic path, ii) the polarization generated across the solar device orders the energy levels in a cascade [6], iii) the electronic transport along the π-stacks is restricted to the π-conjugated rings for the electrons, while the holes move between the dipole groups [7], iv) the adsorption of the optically active molecules at the surfaces of transparent conductive oxides, used as the electrodes, leads to high power conversion when it is realized with the COOH group [28]. Therefore, it is useful to examine various dipole groups for their impact on the energy gaps. In Table 1, the dipole moments in the direction parallel to the photovoltaic transport are collected for all studied groups attached to one aromatic ring. LUMO-HOMO energy differences A collection of molecules with various lengths of the aromatic chains is gathered in Fig. 2. The energy gaps are presented on the scale of solar radiation activity from the soft ultraviolet (350 nm) to the far infrared of a cloudy sky (1400 nm). The impact of various dipole groups on the LUMO-HOMO energy difference does not change with the group type when they are attached to long chains (above five benzene rings). Moreover, increasing the number of dipole groups does not change the energy gap for the longer molecules. Due to the lack of this effect, we can focus on the transport properties when the design of the dipole groups is considered. It is well known that the energy gaps obtained with density functional theory - both in the local density approximation (LDA) and the generalized gradient approximation (GGA) - are underestimated, while the energy gaps from a pure Hartree-Fock method are largely overestimated. Thus, we used the hybrid-functional scheme by means of the B3LYP functional, which contains 20% of the exact exchange. The presented series of energies can be treated as an approximation of the optical gaps when only the tendency of the size and chemical-group effects on the gaps is studied. The experimental data are usually closer to the combined GW+BSE approach, which takes into account both the quasi-particle and the excitonic binding energy effects [29,30]. However, from the presented data, it is obvious that a realization of matrices of molecular stacks that are highly efficient from the light-absorption point of view is possible.
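As a quick sanity check on the quoted sunlight window, the photon-energy range corresponding to 350-1400 nm follows from $E = hc/\lambda \approx 1239.84\ \mathrm{eV\,nm}/\lambda$. The short Python sketch below does this conversion; it is an illustration of the wavelength-energy bookkeeping, not part of the paper's workflow.

```python
# Convert between photon wavelength (nm) and energy (eV): E [eV] = 1239.84 / lambda [nm].
HC_EV_NM = 1239.84198  # h*c in eV*nm

def ev_to_nm(energy_ev: float) -> float:
    return HC_EV_NM / energy_ev

def nm_to_ev(wavelength_nm: float) -> float:
    return HC_EV_NM / wavelength_nm

if __name__ == "__main__":
    # The sunlight window used in the text: soft ultraviolet (350 nm) to far infrared (1400 nm).
    print(f"350 nm  -> {nm_to_ev(350):.2f} eV")    # ~3.54 eV
    print(f"1400 nm -> {nm_to_ev(1400):.2f} eV")   # ~0.89 eV
    # Example: a gap of 2.0 eV corresponds to an absorption edge near 620 nm.
    print(f"2.0 eV  -> {ev_to_nm(2.0):.0f} nm")
```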
Such highly efficient stacks are especially feasible if one combines the layers with columns of various molecules that have different sizes and different absorption profiles. Impact of the π-stacking on the energy gaps The absorption efficiency is correlated with the thickness of the photoactive layer (Beer-Lambert law), which cannot be so small that the material becomes transparent [31]. Hence, all 2D structures for solar cells should be examined for their properties across the layers. Stacking causes band dispersion in the direction perpendicular to the slab. This bandwidth broadening, in turn, acts to close the energy gap. In Table 2, we present the effect of wire formation on the band gap and compare the LDA and B3LYP computational methods for chosen systems. The last column in Table 2 displays the intermolecular distances obtained with the LDA method. The separation of benzenes, 3.8 Å, is not much larger than that of graphene multilayers, which is around 3.4 Å [32]. The most distant are molecules with the CH$_2$CF$_3$ groups, at 5.2 Å. This is due to the fact that they are the largest of all atomic groups attached to the rings studied here, and the F atoms do not attract the H atoms at the bottom of the upper neighbor. The COOH groups are the smallest here, but the separations of the molecules terminated by them are also large - 5.1 Å - because the oxygen atoms of the neighboring rings repel each other. The CH$_2$CN groups, although they are also quite large, attract each other between the neighboring rings. This is because N and C tend to "exchange" hydrogen, and this effect leads to the smallest intermolecular distances, 4.6 Å. The separations of the molecules in a wire govern the effect of stacking on the bandgap size; this effect is the strongest for the benzene wires and for molecules containing the CH$_2$CN groups. It is interesting to note that the addition of COOH to rings with other dipole groups weakens the effect of stacking on the band gap. This holds even at the same intermolecular distance (see for instance b1(CH$_2$CF$_3$)$_3$ and b1(COOH,CH$_2$CF$_3$)$_3$, or b1(COOH)$_3$ and b1(COOH)$_6$ in Table 2). The origin of this effect will become clearer in the next subsection. A comparison of the LDA and B3LYP approaches for the isolated molecules and wires usually exhibits a somewhat stronger effect of stacking on the band gap for the hybrid-functional scheme. The origin of this effect is also similar to that of the addition of COOH and relies on the order of the energy levels. Because experimental results for the band gaps of the molecules studied here do not yet exist, it is not easy to determine how good the B3LYP method is in this case. However, one could expect that this method will work similarly to the case of benzene. In order to evaluate the effectiveness of the B3LYP method used in our study, we compared the energies for benzene calculated by different theoretical methods with the experimental result. The energy gap of the isolated benzene molecule, 6.72 eV by means of B3LYP, should be compared with the value from the GW approach, 10.5 eV [33], because neither approach takes into account the excitonic effects. However, the measured optical gap is around 3.6 eV [34], due to the large exciton binding energy in small molecules. This excitonic effect can be obtained theoretically from the difference of the gaps obtained with GW and GW+BSE (BSE means Bethe-Salpeter equation). For benzene, the GW+BSE band gap is around 3.1 eV [30].
Although the B3LYP scheme is much simpler than the GW and GW+BSE methods and is designed for the fundamental gap only, its results are closer to the optical absorption than those of the GW approach. Therefore, we expect that the energy gaps obtained by us with the B3LYP method show the same trends as the optical measurements in a series of similar molecules which differ only in the size or the number or type of the dipole groups. Order of the energy levels The characters of the highest occupied and lowest unoccupied states determine the excitonic radius and binding energy, the oscillator strength of the absorption of light, as well as the electronic transport properties. In the previous work [7], we showed that one of the systems studied here, namely a wire of b1(COOH,CH$_2$CN)$_3$ molecules, possesses very localized states at the conduction band minimum (CBM) and the valence band top (VBT). Under an applied voltage, the electrons moved from the central aromatic ring of the molecule to the neighboring ring, while the holes hopped between the dipole groups. This property might reduce the recombination of carriers and is very much desired for solar cell devices [35]. Therefore, we characterize the states around the band gap for the wires composed of some of the π-stacked molecules studied in this work for their absorption energy. Plots of the projected density of states (PDOS) for wires of benzene and pentacene decorated with various dipole groups are collected in Fig. 3. The choice of the smallest molecule (benzene) is motivated by the need to compare the results for the decorated benzene to our previous studies [7], which used the hybrid-DFT method. The pentacene molecule has been chosen because it possesses a gap in the sunlight range, while the type and arrangement of the dipole groups are varied at the same time. We varied the type and the arrangement of the dipole groups attached to these molecules in order to check the transport properties. As one will see further on, changing the mesogenic part of the molecule does not alter the conclusions. All results are obtained with the BLYP functional, except for one case - namely b1(COOH,CH$_2$CN,CH$_2$CF$_3$) - for which the PDOS is also calculated with the B3LYP method. Figs. 3(g) and 3(h) compare these two methods and lead to the following conclusions: 1) The orders of the energy bands are the same - specifically, the O-projected states are close to the Fermi level, the N-projected DOS is deeper in energy, and the C-ring PDOS lies between the O- and N-projected states. 2) The only difference is in the width of the energy band localized at the C-ring - it is narrower in the B3LYP case and leads to a higher value of the PDOS at the edge of the valence band. Since using the more computationally expensive method does not change the conclusions, we continue with the DFT approach for the PDOS analysis. When the COOH or CH$_2$CN dipole groups are attached to a single benzene ring, the highest occupied levels are composed of states localized at the dipoles and partially on the central aromatic ring, while the lowest unoccupied states are built mainly of the C-ring localized orbitals. The oxygen orbitals are closer to the Fermi level than the states of N origin. The fluorine states are the deepest in energy of all the studied dipole groups. Moreover, if only CH$_2$CF$_3$ groups are attached to benzene, then both holes and electrons are predicted to move through the central ring, which is a very unwanted situation. Summarizing the results in Figs.
3(a)-(h): the best spatial separations of the carrier paths are obtained for CH$_2$CN and COOH. Combinations of these groups work as well. The COOH group is used as a connecting part for the planar structures, as studied in the previous work [6]. It has been demonstrated [28] that by using the COOH group for the deposition of the optical material at the transparent electrodes in solar cell devices, one can obtain the highest photovoltaic conversion of all experimentally tested contacts. Thus, only combinations of the COOH and CH$_2$CN dipoles attached to pentacene molecules are studied further, because they represent the group of molecules whose energy gaps fall within the considered range of the solar spectrum. In Figs. 3(i)-(o), for each studied case, the highest occupied states of dipole-group origin are separated from the Fermi level by at least one band. It seems that the carrier separation found for the decorated benzene in our previous studies [7] may not work very well for larger molecules. However, the PDOS of the dipole-localized states is very high, which might make the oscillator strength for the absorption larger than that of the band positioned at the Fermi level. Calculations of the optical properties by means of the GW+BSE approach are necessary for full insight. On the other hand, when one builds the planar systems with π-stacking, adds the electrodes, and applies a voltage - which leads to a Stark shift of the energy levels - then hole transport between the dipole groups, omitting the central aromatic frame, might be plausible. Additionally, we checked a few more possible connections of two aromatic rings, through: (i) acetylene (-Ac-) bridges, (ii) nitrogen (-N-) bridges, or (iii) a -C-C- bond. The PDOS of naphthalene and biphenyl with four CH$_2$CN groups, and of similar systems with the above bridges, are presented in Figs 4(a)-(d). There is only a little improvement in the positioning of the dipole groups in the PDOS, and it occurs when the -N- bridge is used, with respect to the other possibilities. This means that, after the addition of COOH, the light holes will move between the carboxyl groups and the heavy holes between the CH$_2$CN dipoles, if many terminal groups are attached to the edges. Figs 4(e)-(f) display the projected DOS for the decorated coronene molecule. They confirm the earlier findings for the linear aromatic molecules. As promised earlier in this work, we now comment on the two stacking effects mentioned in the preceding subsection. The strength of the bandgap lowering due to stacking is related to the broadening of the bands close to the Fermi level. This, in turn, is a function of the strength of the interaction between the neighboring molecules and, of course, of the distance between them (the closest neighboring groups of atoms are the CH$_2$ moieties below the molecular plane and the CN or CF$_3$ tops of the dipoles below). Since the addition of the COOH group moves the levels of the other dipoles down in energy, the band broadening of these deeper states does not affect the energy gap as much. Therefore, the addition of COOH weakens the effect of stacking on the gaps of molecules with CH$_2$CN or CH$_2$CF$_3$ dipoles. In the same way, the B3LYP method - which lowers the contribution of the dipole-group projected DOS at the Fermi level with respect to BLYP - leads to a smaller stacking effect on the band gap.
The latter effect is due to the fact that the C-ring states of the neighboring molecules originate from groups of atoms which are more distant than the neighboring dipoles in the stack. Absorption spectra of three molecules in a series of the growing size of the mesogenic part Finally, we compare the theoretical absorption spectra for a series of three molecules with a growing number of benzene rings in the chain, i.e. 2, 5 and 9, but having the same number and type of dipole groups, namely four COOH moieties attached on both sides of the longer molecular axis, as in Fig. 1 for b9(COOH)$_4$ (i.e. b9X4) and in Fig. 3(i) for b5X4. These spectra are simulated with the Yambo code [36], using its capability to calculate the dielectric function. The response function was obtained at the random phase approximation level. In Fig. 5, both the interacting (Im ε) and noninteracting (Im ε$_0$) dielectric functions show the blue shift of the dominant absorption peaks with growing molecular size. The imaginary parts of the noninteracting dielectric functions have absorption edges at the energies which correspond to the DFT energy gaps, which are: 2.99 eV for b2X$_4$, 0.84 eV for b5X$_4$, and 0.04 eV for b9X$_4$. Measured absorbance is more similar to the interacting dielectric function, which has prominent peaks between 2 and 4 eV for b5X$_4$ and b9X$_4$, and between 4 and 6 eV for b2X$_4$. The GW+BSE spectrum would be shifted up due to the many-body effects and down due to the excitonic effects. Nevertheless, our main message - that it is possible to tune the optical spectra of the dipole-decorated molecules in the π-stacks - is still valid. The second optimistic message is connected to the transport properties, which are correlated with the oscillator strength of the dipole optical transitions and the creation of the electron-hole pairs. As we see from the interacting dielectric function, the lowest absorption peak is much higher in energy than the HOMO-LUMO gap. This means that the optical transitions between the DOS peaks which are closest to the Fermi level (on its occupied and unoccupied sides) are not allowed. These peaks were of the same origin, namely they were localized at the central C-rings and not at the dipole groups. Instead, the higher energetic positions of the first absorption peaks in these molecules suggest that the allowed transitions are between the C-rings and the dipole groups. This is a wanted property, because the electron-hole pairs will be generated on spatially distant parts of the molecule. Thus, we retain the spatial separation of the charge transport for electrons and holes also for molecules larger than benzene. CONCLUSIONS Our aim is to tailor the band gap in the molecular π-stacks, in order to propose systems - with the ferroelectric properties investigated earlier [6,7] - for solar cell applications. Tuning the optical properties is possible by varying the number of benzene rings and the choice of the dipole groups when the molecules are small. In the case of larger molecules, with longer aromatic chains, the band gaps are almost independent of the terminal groups. On the other hand, the choice of the dipole groups and their number are critical parameters for the atomic localization of the highest occupied and lowest unoccupied states.
Therefore, the character of the photogenerated electron-hole pair is sensitive to the chemical connections between the neighboring molecules in the stacks, and this, in turn, determines the transport properties [7]. Summarizing: linear chains of between five and nine aromatic rings are good candidates for building blocks of organic nanostructures for photovoltaic applications when they are terminated with the COOH and CH$_2$CN groups, while using the CH$_2$CF$_3$ groups does not give the desired properties. The dipole selection rules for the optical transitions retain the charge-path selectivity, which appears to be lost when looking just at the PDOS.
5,824
2017-01-11T00:00:00.000
[ "Materials Science", "Physics", "Chemistry" ]
Study on the Development and Use of E-commerce in the Special Region of Yogyakarta with De Lone and Mc. Lean IS Success Model The high growth of e-commerce in Indonesia is influenced by several things, such as the quality of human resources and internet network infrastructure. The utilization of information technology in running a trading business, often known as e-commerce, can provide small companies with flexibility from production to the final delivery of a transaction. This includes the Special Region of Yogyakarta (DIY), which is one of the areas with the highest level of e-commerce service users in Indonesia. Although DIY is one of the areas with the highest level of e-commerce service users in Indonesia, e-commerce business activity in Indonesia is still relatively new, so there are still many shortcomings in its implementation. Therefore, research needs to be done to study the development and utilization of e-commerce in the Special Region of Yogyakarta (DIY). The researchers use the e-commerce metrics suggested by De Lone and Mc. Lean (2004) as the foundation of the instrument. Research data were processed using Smart Partial Least Square (Smart-PLS) 3.0. The analytical model used in this study is a structural equation model (SEM), with inductive analysis using the goodness-of-fit (inner model) approach, which determines the suitability of the model used in this study. QRIS-based transactions are used in various sectors, such as the retail of necessities, health, and donations. The method is very easy: traders just send a QRIS photo, and the payer scans it, selects the gallery icon, takes a QRIS photo, enters the nominal amount, enters a PIN, clicks pay, saves, and sends the proof of payment to the seller. The increasing number of users is also influenced by the quality of information and the quality of services presented on each e-commerce page. Information quality is defined as the ease with which consumers can obtain information related to the product they are looking for. Consumers will rely on the descriptions and photos provided by the website to understand the product (Putri & Punjani, 2019). Sharma & Lijuan (2015) added that the quality of information is also indirectly closely related to service quality. Service quality refers to how well the services provided by internal providers or companies affect consumer confidence so that the buying process will occur, meaning that the quality of information and services will determine consumer satisfaction as end-users. The use of e-commerce, which is very closely related to information technology, is also closely related to the concept of the cyber city. The concept of a cyber city is described as an area with adequate information technology infrastructure in terms of integrated network connectivity, bandwidth capacity, wireless and cable internet, and the installation of Wi-Fi hotspots in open places such as public areas. So far, the concept of the cyber city has been widely applied in a number of areas around the world, including the Special Region of Yogyakarta (DIY). DIY is one of the regions with the highest level of e-commerce service users in Indonesia, below Bandung and Jakarta, with 81.3%. The high level of users of this service is supported not only by the implementation of the cyber city concept in DIY but also by the number of universities in DIY, which is among the highest in Indonesia, which is conducive to encouraging the establishment of e-commerce or startup businesses.
In addition, the level of internet use in the business sector in DIY, which is among the highest, also contributes to the high level of e-commerce service users in DIY. Micro, small and medium enterprises (MSMEs) are one of the causes of the increasing number of users of e-commerce services in the Special Region of Yogyakarta. Changes in business behavior patterns also affect changes in business models. As of October 2021, six and a half million merchants in 34 provinces and 480 districts/cities, 85 percent of them MSMEs, have been recorded as using QRIS. This is supported by 57 licensed Payment System Service Providers (PJSP). The merchants registered as QRIS users comprise 324 thousand large businesses, 614 thousand medium enterprises, 1.5 million small businesses, 4 million micro-enterprises, and 15 thousand donation/social entities. The business model currently being implemented is innovative rather than inventive. Inventive means finding something but not applying it in everyday life, while innovative means finding something and applying the findings in everyday life. Thus, innovators who carry out innovative activities are creative people who create something to be applied in everyday life. Meanwhile, to accommodate business models through e-commerce that continue to grow, it is necessary to use information and communication technology (ICT) systems (Kontolaimou & Skintzi, 2018). E-commerce is also referred to as the application of ICT in business and commerce. The concept of e-commerce has developed in recent years and has contributed to the economic growth of several developed and developing countries. In a country with a large young population, such as India, people on average shop online rather than in person. Young residents have high enthusiasm and want to learn new things, including innovative ways of shopping through online media. This activity makes companies work hard to change their sales strategies, and they have turned much of their sales toward an online strategy. ICT is used to make and complete business transactions. The role of the internet is very important in shaping a company's strategy for conducting e-commerce activities. ICT greatly influences the development of the e-commerce industry (Kumar et al., 2014). From the previous explanations, the level of e-commerce business activity in Indonesia, especially DIY, is still relatively new, so there are still many shortcomings in its implementation. These shortcomings include the inadequate internet network in several areas, especially rural areas, and the concept of payment, which is still a problem for many residents, especially the cash on delivery (COD) system, whose problematic cases often come to the surface. On this basis, the authors are interested in conducting a study of the development and use of e-commerce in the Special Region of Yogyakarta. E-commerce concept E-commerce comes from electronic commerce. Laudon & Laudon (2014) explain that e-commerce refers to commercial transactions carried out digitally between individuals and companies. They explain that the development of e-commerce stems from the development of information technology, which affects the business transformation of each company in line with the demands of an increasingly modern and rapidly growing era. The existence of e-commerce provides many opportunities to sell directly to consumers, without intermediaries such as distributors or retail outlets.
The presence of the concept of e-commerce can also reduce purchase transaction costs because it can significantly eliminate intermediaries in the distribution channel. Bukht & Heeks (2018) add in their research that e-commerce is a broad manifestation of the digital economy, which means that the application of e-commerce is very dependent on adequate information technology covering distribution, purchasing, sales, service, public relations, or other innovations that may continue to grow. Digital economy Today's business development is strongly influenced by technology that can unify new economic concepts, through e-commerce, with business strategies to form a very close relationship. The digital economy is a concept of economic activity that utilizes the assistance of information and communication technology (ICT), with e-commerce practices as an example of its application. This includes buying and selling transactions, marketing, and other operational activities. The practice of e-commerce is the application of communication services to product marketing strategies and the use of increasingly advanced internet developments (Zimmerman, 2000). The digital economy is part of the output that comes solely from digital technology. This means that current and future business model innovations will be influenced by the development of the digital economy. Bukht & Heeks (2018) explained that the digital economy is part of the application of e-commerce, which is also highly dependent on other digital sectors such as hardware manufacture, software and IT consulting, information services, and telecommunications. Laudon & Laudon (2014) state that an information system is a collection of components that function to collect, store and process data and aim to provide information, knowledge, and digital products that work together to achieve a goal. Information systems are a skill that must be possessed by businesspeople, who are affected by a business environment that tends to experience extraordinary changes. Every company must have a high-quality information system to be able to provide maximum benefits for users. De Lone and Mc. Lean conducted a study in 1992 to define the success of an information system in terms of six variables: system quality, information quality, intention to use, user satisfaction, individual impact, and organizational impact. They then modified the existing findings into three main dimensions, namely information quality, system quality, and service quality. Another modification was the elimination of individual impact and organizational impact as separate variables, replacing them with net benefits (Bahari & Mahmud, 2017). Chong et al. (2010) explained that the model can be applied in an e-commerce environment as follows: Information Systems 1. System quality measures the desired characteristics of an e-commerce system. These characteristics include factors such as reliability, adaptability, and ease of access when used. 2. Information quality measures the quality of information generated through e-commerce systems.
The quality of information can also identify the impact of the information and content related to the data presented. 3. Service quality is the overall support provided by web-based service providers. The indicators used are assurance, empathy, and responsiveness, which are considered measures of service quality. 4. Intention to use (use) measures everything from website visits and navigation within the site to information retrieval and transaction execution. 5. User satisfaction is an important means of measuring customer opinions about e-commerce systems and should cover the entire customer experience cycle from information-seeking through purchase, payment, acceptance, and service. 6. Net benefits are the most important measure of success, as they capture the balance of positive and negative impacts of e-commerce on customers, suppliers, employees, organizations, markets, industries, economies, and even society. Figure 2. Path Diagram This model has been applied by many researchers in their own research, including Chong et al. (2010), who successfully applied the method in the business-to-consumer (B2C) framework of the student loan industry; Angelina et al. (2019), who applied the model to the use of e-commerce in Indonesia; and Dorobâț (2014), who managed to measure the success of e-learning using this method. From the explanations above, the hypotheses that can be formulated are: H1: System quality has a significant effect on the intention to use (use), H2: System quality has a significant effect on user satisfaction, H3: Information quality has a significant effect on the intention to use (use), H4: Information quality has a significant effect on user satisfaction, H5: Service quality has a significant effect on the intention to use (use), H6: Service quality has a significant effect on user satisfaction, H7a: Intention to use (use) has a significant effect on user satisfaction, H7b: User satisfaction has a significant effect on the intention to use (use), H8: Intention to use (use) has a significant effect on net benefits, H9: User satisfaction has a significant effect on net benefits. Research design The research design used is confirmatory research to test the effects of the system quality, information quality, service quality, intention to use, user satisfaction, and net benefits variables. This research is field research, collecting data from the field. Regarding the time dimension, this study falls into the category of cross-sectional research, a study in which information or data collected from research subjects are used only once, at one point in time, to answer the problem formulation (Sekaran and Bougie, 2016). The question instrument adopted questions that had been developed by previous researchers, using a 5-point Likert scale ranging from strongly disagree (1 point) to strongly agree (5 points). The analytical model used in this study is the structural equation model (SEM). With this research data, model testing, measurement testing, and structural model testing can be carried out simultaneously. The researchers examine the structural model of Figure 2, in which De Lone and Mc. Lean (2004) suggest a two-way relationship between system use (use) and user satisfaction. In SEM this relationship is referred to as indeterministic, and this study runs two separate structural models, one with a causal relationship from use to satisfaction (H7a) and one from satisfaction to use (H7b) (Chong et al., 2010).
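As an illustration of how the two separate structural models (H7a vs. H7b) can be specified in practice, the sketch below uses the Python package semopy with lavaan-style syntax. This is a covariance-based SEM illustration rather than the PLS estimation actually used in the study, and the indicator names (sq1, sq2, ...) and the file name are placeholders, not the study's questionnaire items.

```python
# Illustrative SEM specification of the DeLone & McLean paths (semopy, lavaan-style syntax).
# Indicator names are placeholders; the study itself used PLS (Smart-PLS 3.0), not semopy.
import pandas as pd
from semopy import Model

# Model A implements H7a (use -> satisfaction); swapping the USE/US regression gives H7b.
MODEL_H7A = """
# measurement model (three placeholder indicators per construct)
SQ  =~ sq1 + sq2 + sq3
IQ  =~ iq1 + iq2 + iq3
SeQ =~ seq1 + seq2 + seq3
USE =~ u1 + u2 + u3
US  =~ us1 + us2 + us3
NB  =~ nb1 + nb2 + nb3
# structural model: H1, H3, H5 into USE; H2, H4, H6, H7a into US; H8, H9 into NB
USE ~ SQ + IQ + SeQ
US  ~ SQ + IQ + SeQ + USE
NB  ~ USE + US
"""

def fit_model(description: str, data: pd.DataFrame):
    """Fit one structural model and return the parameter estimates."""
    model = Model(description)
    model.fit(data)
    return model.inspect()  # path coefficients, standard errors, p-values

if __name__ == "__main__":
    # df should hold one column per indicator (sq1 ... nb3) and one row per respondent.
    df = pd.read_csv("survey_responses.csv")  # hypothetical file name
    print(fit_model(MODEL_H7A, df))
```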
Population and Sample The technique used is purposive sampling, i.e., the sample is determined based on a specific consideration, namely residents who live in DIY and who have used or are currently using e-commerce. The minimum sample size is 100 people, based on five times the number of estimated parameters (Hair et al., 2014). Data were collected through the distribution of online questionnaires using Google Forms. Research data were processed using Smart Partial Least Square (Smart-PLS) 3.0. Research Instruments This study uses the e-commerce metrics suggested by De Lone and Mc. Lean (2004) as the foundation of the instrument. There are 28 items for the various constructs and variables. This research was conducted on four e-commerce platforms, namely Lazada, Bukalapak, Tokopedia, and Shopee. Six variables are used to measure the research model, namely system quality (SQ), information quality (IQ), service quality (SeQ), intention to use (U), user satisfaction (US), and net benefits (NB). The inductive analysis using Partial Least Squares (PLS) includes the goodness-of-fit (inner model) assessment, which serves to determine the suitability of the model used in this study with the six variables. The correlation between constructs is measured by path coefficients and their level of significance, which is then compared with the research hypotheses. The significance level used is 5%, and the data analysis uses Warp PLS 6.0 software so that the R-square can be obtained as a measure of goodness-of-fit (Chin & Newsted, 1999). Table 2 shows that most respondents are female (65.11%). Meanwhile, the working status was dominated by employees (49.58%). Test Measurement Model Before the analysis based on the structural equation model with Smart-PLS 3.0, validity and reliability tests were carried out to ensure the adequacy and accuracy of the data for further analysis. The measurement model test was carried out using the Smart-PLS 3.0 software. 1. Validity Test Ghozali (2014) explains that the AVE value of each variable must show a score above 0.5, which means that the data on all variables are declared valid. In Table 3 the quality variable has an Average Variance Extracted (AVE) value of 0.555, the service variable 0.600, the satisfaction variable 0.674, the user variable 0.746, the benefit variable 0.766 and the information variable 0.785. Table 4 shows that the composite reliability and Cronbach's alpha values of all the variables tested in this study were declared reliable. Sharma (2016) states that if all latent variables have composite reliability and Cronbach's alpha above 0.7, the research data are declared reliable. Model Feasibility Test The Goodness of Fit (GoF) index value approach was used to test the feasibility of the model in this study by finding the R² value of the dependent variable and the Average Variance Extracted (AVE) value of each latent variable (Tenenhaus et al., 2004). The results of the model feasibility test, carried out twice, can be seen in Table 5. Hypothesis Test Results SEM was run on the data twice due to the inability of the SEM software to handle the two-way relationship between the two constructs (system use and user satisfaction): one run for the model with H7a (system use to user satisfaction) and the other with H7b (user satisfaction to system use). Overall, the fit of De Lone & McLean's e-commerce model is quite good, indicating that it has great potential as an e-commerce research framework.
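For readers who want to reproduce the validity and reliability thresholds mentioned above, the AVE and composite reliability of a reflective construct can be computed directly from its standardized indicator loadings. The sketch below uses made-up loadings purely for illustration; it does not reproduce the study's data.

```python
# Compute AVE and composite reliability from standardized indicator loadings.
# AVE = mean of squared loadings; CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances).
import numpy as np

def ave(loadings):
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam ** 2  # error variance of each standardized indicator
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + errors.sum()))

if __name__ == "__main__":
    # Hypothetical standardized loadings for a "system quality" construct with four items.
    sq_loadings = [0.72, 0.75, 0.78, 0.73]
    print(f"AVE = {ave(sq_loadings):.3f}")                    # should exceed 0.5 (Ghozali, 2014)
    print(f"CR  = {composite_reliability(sq_loadings):.3f}")  # should exceed 0.7
```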
In both analyses, it was found that 2 of the 9 hypotheses were not significant, one of which was both negative and not significant. Discussion The results of hypothesis testing with models 7a and 7b showed the same results. H1 is not accepted, which means that system quality does not affect the intention to use (use). System quality is expected to provide more value for consumers to visit e-commerce company websites. However, it turns out that the available system quality, in the form of reliability, adaptability, and ease of access when used, does not increase the intensity with which consumers visit certain e-commerce websites. This result is the same as the research conducted by Angelina et al. (2019), which states that system quality does not have a significant relationship to use. In their research, they argue that although e-commerce has advantages in system quality, it does not have a significant impact on the use of e-commerce because of the trust factor. Chong et al. (2010), in their first research model, also stated that these two variables did not have a significant relationship. An organization should focus on the quality of the information in a web service. The relationship of system quality to use can be affected by the novelty of the system. Marjanovic et al. (2016) explained that the relationship between system quality and intention to use (use) can be influenced by factors in the internal development of e-commerce information systems. For example, every e-commerce business must have a system that can continue to grow. There need to be new capabilities that can increase the success of e-commerce with the aim of increasing the level of usage. Based on the subsequent results, H2 is accepted, which means that system quality has a significant effect on user satisfaction. The significance of this hypothesis test can be interpreted as meaning that the higher the quality of the website system, the more user satisfaction will increase. This study has the same results as research from Angelina et al. (2019) and Chong et al. (2010), in which users of e-commerce websites assess the quality of the system based on whether the system is easy to use, and whether it can convey the information needed without taking a long time; these are indicators for users of the quality of the system in question. If these indicators have been met, the user will feel satisfied with the system. Chong et al. (2010) also emphasize that the factors that cause system quality to significantly affect user satisfaction are convenience and system integration. As we already know, e-commerce today has developed rapidly and is increasingly integrated at the system level, which is very useful and helpful for users. In the third hypothesis test, it was found that there was a significant influence of information quality on intention to use (use). The significance of this hypothesis test can be interpreted as meaning that the higher the quality of the information provided by the website, the more the intention to use the service will increase. Ong & Ruthven (2009) argue in their theory that the quality of information can be interpreted as measuring the quality of the content of the information system, which can be seen from the accuracy, timeliness, and relevance of the information. Accurate and precise information quality will provide a better experience for users.
Thus, when users feel that the information they need can be met properly, their intention to use these services, for instance via the website, will increase. Angelina et al. (2019) explain that the intention to use the service is closely related to the provision of good-quality information, which will later form a sense of satisfaction with the products offered. The test results show that H4 is accepted, which means that information quality has a significant effect on user satisfaction. This shows that as the quality of information becomes better and more complete, customer satisfaction will increase. Angelina et al. (2019) explain that providing clear information will provide satisfaction for customers. Yandi & Septrizola (2019) explain in their research that the quality of information is one of the keys to success in increasing the satisfaction felt by consumers when opening certain e-commerce company websites. The quality of information provides a distinctive shopping experience in today's era of industrial technology. The element of information quality is of the greatest value for the formation of user satisfaction, so if the information provided to consumers is inaccurate or does not correspond to the actual state of the product, it will create distrust of or dissatisfaction with these services. The test of the fifth hypothesis shows that service quality influences the intention to use these services. The better the quality of service, the higher the intention to use it, so the fifth hypothesis is accepted. This study has results similar to those of Angelina et al. (2019) and Chong et al. (2010). They argue that good service quality will increase the intention to use these services. This is because services with good capabilities and experience are believed to be able to solve user problems if there are obstacles or complaints; for example, customer care services that can be contacted 24 hours a day are ready to help at any time. Subsequent testing shows that service quality has a significant influence on customer satisfaction, so hypothesis 6 is accepted. This study explains that if the quality of service increases, then customer satisfaction will also increase. Chong et al. (2010) report the same result. In theory, it is explained that user satisfaction is largely determined by the quality of information and services, so all service companies must focus on providing clear information and services, especially if the company has a B2C (business-to-consumer) focus (Angelina et al., 2019). Although several other studies have stated that these two variables have no effect, Kotler & Keller (2016) state that service quality is a factor that influences customer satisfaction. The test of hypothesis 7a shows that the intention to use variable is negative and does not have a significant effect on user satisfaction. In this study, the p-value is 0.270, which is greater than 0.05, showing that intention to use does not affect user satisfaction. So this hypothesis is rejected. As explained in the previous test results, user satisfaction is influenced by several factors, including service quality and information quality. The result of hypothesis testing with the H7b model is that it is accepted, which means that user satisfaction affects intention to use (use). User satisfaction is expected to provide more value for consumers to visit e-commerce company websites.
Angelina et al. (2019) and Chong et al. (2010) hold a similar view. Their findings also complement DeLone and McLean (2004), who explain that user satisfaction measures users' opinions of an e-commerce system and must cover the entire user-experience cycle of using e-commerce services. Satisfaction can be linked to future transactions and therefore influences the decision to reuse the system or not, so several previous studies concluded that satisfaction has a large influence on intentions to use e-commerce systems. The subsequent results show that H8 is accepted, which means that intention to use has a significant effect on net benefits: the higher the user's intention to use the system, the greater the net benefits. This result is supported by Angelina et al. (2019), who state that users feel the net benefits of a product because of the support from their intention to use the system. It is also similar to Wu & Wang (2006), who showed that a perceived net benefit of using e-commerce affects usage, whereas if no benefit is perceived, system use is not affected. The test results show that H9 is accepted, which means that user satisfaction has a significant effect on net benefits: net benefits are strongly influenced by increases in customer satisfaction. These results are the same as those of Angelina et al. (2019) and Chong et al. (2010), who state that user satisfaction provides value for achieving the net benefits of an e-commerce service system. DeLone and McLean (2004) explain that customer satisfaction is one of the most important indicators for achieving net benefits. E-commerce makes users' lives easier, more time-saving, productive and effective, and the more benefits users feel they receive, the higher their satisfaction. Chong et al. (2010) note that the weak relationship involving system use in the model could reflect model misspecification, a real phenomenon, or both. CONCLUSION Based on this study with the DeLone and McLean model, there is a significant influence between system quality and user satisfaction, information quality and intention to use, information quality and user satisfaction, service quality and intention to use, service quality and user satisfaction, user satisfaction and intention to use, intention to use and net benefits, and user satisfaction and net benefits. Most of these paths show a model fit that is also able to explain the use of e-commerce in the Special Region of Yogyakarta. On the other hand, system quality has no influence on intention to use, and intention to use does not have a significant effect on user satisfaction, which means that the intention to use e-commerce services among users in the Special Region of Yogyakarta is not driven only by the quality of the system owned by the e-commerce service. Likewise, the satisfaction of e-commerce users in the Special Region of Yogyakarta is not influenced by intention to use.
Companies engaged in e-commerce services can therefore focus on achieving net benefits by paying attention to customer satisfaction, service quality and system quality. One limitation faced by the researchers is the inability to measure the reciprocal relationship between intention to use and user satisfaction within a single structural equation modeling analysis, possibly due to an error in the model specification: SEM could not estimate the two-way relationship (H7a and H7b) simultaneously in one analysis, and therefore fails to represent the original conceptual relationship in the DeLone & McLean e-commerce model. Further research can use other structural equation modeling tools to obtain additional results.
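As an illustration of the kind of specification involved, the structural part of the model tested here can be written down for a package such as semopy. This is a minimal sketch under assumed, hypothetical column names (composite scores per respondent), not the analysis actually performed in the study; it also shows why H7a and H7b cannot be estimated in the same recursive model and must be fitted as separate specifications.

```python
# Hypothetical sketch of the DeLone & McLean path model; column names are assumptions.
import pandas as pd
from semopy import Model

# Structural part only; this specification contains H7a (intention_to_use -> user_satisfaction).
# Estimating H7b instead requires swapping that path into the intention_to_use equation and
# fitting a second model, since both directions cannot be estimated at once in one recursive model.
DESC = """
intention_to_use ~ system_quality + information_quality + service_quality
user_satisfaction ~ system_quality + information_quality + service_quality + intention_to_use
net_benefits ~ intention_to_use + user_satisfaction
"""

df = pd.read_csv("ecommerce_survey_scores.csv")  # hypothetical file of composite scores

model = Model(DESC)
model.fit(df)               # maximum-likelihood estimation
print(model.inspect())      # path coefficients, standard errors and p-values
```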
6,272
2022-09-20T00:00:00.000
[ "Business", "Economics" ]
Biomimetic cartilage scaffold with orientated porous structure of two factors for cartilage repair of knee osteoarthritis Abstract A dual-layer biomimetic cartilage scaffold was prepared by mimicking the structural design, chemical cues and mechanical characteristics of mature articular cartilage. The surface layer was made from collagen (COL), chitosan (CS) and hyaluronic acid sodium (HAS). The transitional layer with a microtubule array structure was prepared with COL, CS and silk fibroin (SF). PLGA microspheres containing kartogenin (KGN) and polylysine-heparin sodium nanoparticles containing TGF-β1 (TPHNs) were constructed for the surface and transitional layers, respectively. The SEM results showed that the dual-layer composite scaffold had a layered structure similar to natural cartilage. The in vitro biocompatibility experiment showed that the biomimetic cartilage scaffold with orientated porous structure was more conducive to the proliferation and adhesion of BMSCs. A rabbit KOA cartilage defect model was established and biomimetic cartilage scaffolds were implanted in the defect area. Compared with the surface layer and transitional layer scaffold groups, the dual-layer biomimetic cartilage scaffold group showed that the defects had been completely filled, the boundary between new cartilage and the surrounding tissue was difficult to identify, and the morphology of cells in the repair tissue was almost in accordance with normal cartilage after 16 weeks. All these results indicated that the biomimetic cartilage scaffold could effectively repair the KOA defect, which is related to the fact that the scaffold could guide the morphology, orientation, proliferation and differentiation of BMSCs. This work could potentially lead to the development of multilayer scaffolds mimicking the zonal organization of articular cartilage. Introduction Cartilage defects are considered a challenge for tissue engineering because of the tissue's avascular character and the fact that only one cell type (chondrocytes) is present [1,2]. Mature cartilage is composed of different zones or layers, which vary in extracellular matrix (ECM) components and the orientation of the constituents [3,4]. Each zone is maintained by a unique combination of cellular, biomolecular, mechanical and physical factors. Enzymatic degradation of the extracellular matrix, deficient new matrix formation, cell death and hypertrophic differentiation of cartilage cells can lead to knee osteoarthritis (KOA) [5,6]. Hence, the treatment of KOA remains a critical and urgent problem in orthopaedics worldwide. Current therapies such as autograft transfer or autologous chondrocyte transplantation rarely restore the tissue to its normal state [7]. Fortunately, cartilage tissue engineering provides a promising alternative strategy [8]. There is a need for engineered scaffolds that recreate the zonal organization of articular cartilage for treating full-thickness articular cartilage defects. Natural or synthetic scaffolds [9][10][11][12] have been shown to support the development of cartilage tissue, but due to their simplified structure, the orientation of their constituents and their mechanical behaviour differ from those of mature hyaline cartilage [13,14]. Stratified scaffolds that mimic bone-cartilage tissue not only emulate the graded mechanical properties of articular cartilage but also have excellent biocompatibility, biodegradability and cell affinity [15].
Some kinds of natural materials and growth factors, such as collagen (COL), chitosan (CS), kartogenin (KGN) and transforming growth factor-β (TGF-β), have been widely used in cartilage tissue engineering [16]. COL is the major constituent of the ECM in natural articular cartilage [17]. In the outermost superficial zone, collagen fibrils are oriented parallel to the articulating surface, while in the middle and calcified zones the collagen fibrils are random and perpendicular. COL has been widely used in cartilage tissue engineering due to its excellent biocompatibility, negligible immunogenicity, cell adhesion and biodegradability [18,19]. COL can also specifically interact with growth factors, which promotes cell ingrowth and remodelling [20,21]. However, the use of COL is commonly limited by its insufficient mechanical properties and high release rate [22][23][24]. Chitosan (CS) is a natural, high-quality polysaccharide similar in structure to glycosaminoglycan (GAG), which has been reported to enhance cell adhesion, mesenchymal stromal cell (MSC) proliferation and the functional expression of osteoblasts [16,25]. Some recent studies suggested that the degradation of hyaluronic acid sodium (HAS), naturally contained in the articular structures, is related to inflammatory factors, enzymes, immune cells and oxidants present in the KOA articulation [26]. Intra-articular injection of HAS is a current therapy for the treatment of osteoarthrosis [27]. It is known that HA downregulates inflammatory factors and restores the rheological properties of the synovial fluid [28]. As a natural biomaterial, SF has attracted much attention in cartilage tissue engineering because of its desirable biocompatibility, low immunogenicity, controllable biodegradability and high mechanical strength [29]. In addition, SF is reported to have many benefits for enhancing the properties of bioactive molecules in vitro and in vivo [30,31]. Polymer microspheres and nanoparticles have been widely used to encapsulate proteins or growth factors to preserve their activity and control their delivery; for example, PLGA microspheres and polylysine-heparin sodium (PLL-HS) nanoparticles have been combined with scaffolds in order to achieve controlled release [32,33]. Growth factors encapsulated in polymer microspheres have been demonstrated to be effective in providing excellent local promotion of cell adhesion, proliferation and growth [34,35]. Kartogenin (KGN), a non-protein small-molecule chondrogenesis-inducing agent, was first reported by Johnson et al. in 2012 [6]. Several studies of KGN on cartilage have been reported, and the results showed that KGN can promote the healing of cartilage defects [36]. The zonal organization of articular cartilage during development is formed by controlling the secretion and spatial distribution of transforming growth factor-β1 (TGF-β1). The outermost superficial zone and the middle zone are maintained by TGF-β1 signalling. We here report a dual-layer biomimetic cartilage scaffold that mimics the distinct collagen orientation and ECM components of mature articular cartilage. The characterization of the morphology and mechanical properties of the scaffold showed that the new scaffold had a structure similar to natural cartilage and possessed high mechanical strength. The chondroinductivity and cartilage repair ability of the dual-layer biomimetic cartilage scaffold were further studied in vitro and in vivo. Extraction of silk fibroin (SF) SF was prepared by a previous method [38].
In brief, silkworm cocoons were boiled in a 0.5% (wt) Na2CO3 solution for 30 min and then washed thoroughly with distilled water to remove the glue-like sericin proteins and wax. The degummed silk fibre collected above was dissolved in a CaCl2/H2O/CH3CH2OH solution (molar ratio 1:8:2) at 80 °C for 1 h. The resulting mixture was dialyzed in distilled water with a cellulose membrane tube (molecular weight cutoff: 8-14 kDa) for 3 days. The concentration of the resulting aqueous solution (1.343%, wt) was determined by measuring the dry weight of the solution. Fabrication of the surface layer scaffolds The surface layer was made from COL, CS and HAS. PLGA microspheres (MPs) containing KGN were loaded in the surface layer. The surface layer scaffold was prepared via a freeze-drying method. We prepared five proportions of scaffolds (COL/CS/HAS/MPs, 0.5COL/CS/HAS/MPs, 0.1COL/CS/HAS/MPs, COL/CS/0.5HAS/MPs and COL/CS/0.1HAS/MPs). PLGA MPs were prepared using a double emulsion-solvent evaporation method. The preparation of the surface layer scaffold has been described in detail in a previous paper [38]. Fabrication of the transitional layer scaffold The orientated microtubule scaffold consists of COL, CS, SF and polylysine-heparin sodium nanoparticles containing TGF-β1. We prepared four proportions of scaffolds (COL/CS/TPHNs, COL/CS/0.5SF/TPHNs, COL/CS/SF/TPHNs and COL/CS/3SF/TPHNs). The transitional layer scaffolds were fabricated using a modification of the unidirectional freeze-drying technique. The preparation of the transitional layer scaffold has been described in detail in a previous paper [39]. Analysis of the optimal ratio of the surface layer and the transitional layer scaffolds Scaffolds composed of different COL/CS/HAS and COL/CS/SF ratios were evaluated by determining porosity, swelling, loss rate in hot water, mechanical properties and cell proliferation to obtain the optimum conditions for manufacturing porous scaffolds. Please refer to our previous reports for the specific experimental methods [38,39]. Fabrication of the dual-layer scaffold The prepared COL/CS/0.1HAS/MPs scaffold was compressed and cut to obtain a 4 mm diameter surface layer scaffold. The prepared transitional layer COL/CS/0.5SF/TPHNs scaffold was adhered to the surface layer COL/CS/0.1HAS/MPs scaffold with a small amount of 1% SF. The dual-layer scaffold was dried on an ultra-clean platform. Microstructure characterization The morphology of the dual-layer scaffold was characterized using SEM (Philips-FEI XL30 ESEM-TMP, Eindhoven, the Netherlands). The samples were frozen in liquid nitrogen for 2 min and fractured into sections using a sharp blade. The samples were then sputter-coated with gold and characterized using SEM. Mechanical properties of the dual-layer scaffold The mechanical properties of the dual-layer scaffold were examined with a universal material testing machine (Instron, Britain) at a cross-head speed of 1 mm/min. During the test, the samples were all compressed to 70% strain. The compressive strength and compressive elastic modulus were obtained from the stress-strain data. Each measurement was repeated three times. Isolation and culture of BMSCs BMSCs were extracted from the bone marrow of a 4-week-old male SD rat. All the procedures were approved by the Medical University of Fujian Institutional Animal Care and Use Committee.
Briefly, the tibia and femur were removed and repeatedly flushed with a syringe containing 4 ml DMEM/F-12 with FBS (10%) and penicillin (1%, 100 U/ml)-streptomycin (100 mg/ml) under an aseptic environment. After that, BMSCs were placed in an incubator with 5% (v/v) carbon dioxide (CO2) at 37 °C. The medium was changed every 2 days, and passage 3-4 cells were used for the follow-up experiments. Cell proliferation assessment An MTT assay was applied in this study to quantitatively assess the number of viable cells attached to and growing on the dual-layer scaffold. The scaffold was sterilized by gamma rays, and BMSCs (1 × 10⁴ cells/mL) were co-cultured with the scaffolds in 24-well plates. MTT solution (5 mg/ml in PBS) was added to each well at different culture time points (1, 3, 5 and 7 days) and incubated at 37 °C with 5% (v/v) CO2 for 4 h. Then, DMSO (200 μL) was added to each well to dissolve the formazan crystals. The absorbance of the liquid in each well was measured at 490 nm using an enzyme-linked immunosorbent assay reader (DNM-9602, Beijing Perlong New Technology Co., Ltd., China). Cell viability assessment The viability of BMSCs cultured with the scaffold was evaluated by AO/EB double staining. BMSC suspensions (1 × 10⁴ cells/ml) were cultured in a mixture of culture medium and half leach liquor of the scaffolds in 24-well plates and incubated for 3, 5 and 7 days, with the medium changed every 2 days. A mixture of AO (100 μg/ml) and EB (100 μg/ml) was then added in the dark for 10 min of staining, followed by washing with PBS. Finally, the stained cells were observed under an inverted fluorescence microscope (Olympus, IX71, Tokyo, Japan). Establishment of the animal model All animal studies were conducted at the Medical University of Fujian (Fuzhou, China), and all animal surgery protocols were approved by its Institutional Animal Care and Use Committee. A total of 16 male New Zealand white rabbits aged 16 weeks were used and randomly divided into four groups (the control group without any implants, the surface layer scaffold group, the transitional layer scaffold group and the dual-layer scaffold group). The anterior cruciate ligament of the rabbit knee joint was first cut, and the KOA model was obtained after feeding for 1 month. One month later, the epidermis and muscle tissue of the knee joint were cut lengthwise in a sterile environment, the surface of the knee joint was drilled with a manual corneal trephine, and the animal model of a KOA cartilage defect was obtained. In brief, the knee joints of the rabbits were exposed through an operation under general anaesthesia (Figure 1). A full-thickness cylindrical defect (4 mm in diameter and 5 mm deep) was created with a trephine bur in the middle of the patellar groove of the rabbits' knees. The cartilage samples were harvested at 4, 8, 12 and 16 weeks post-operation. Histological analysis The obtained articular cartilage samples were fixed with 4% paraformaldehyde solution for 7 days and then decalcified in 10% ethylenediaminetetraacetic acid solution for 2 months before being embedded in paraffin. The samples were cut into 4-μm-thick sections and stained with hematoxylin and eosin (H&E), alcian blue (A-B), toluidine blue (T-B) and safranin-O fast green (S-F). The sections were observed and imaged using a light microscope.
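As a side note on the MTT readout described above, the proliferation measure reduces to the mean and spread of the 490 nm absorbance per scaffold group and time point. The following is a minimal sketch with hypothetical optical-density values, not the measured data.

```python
# Hypothetical OD490 readings for one scaffold group; rows are culture days 1, 3, 5, 7
# and columns are replicate wells. Values are illustrative only.
import numpy as np

od_dual_layer = np.array([
    [0.21, 0.23, 0.22],
    [0.35, 0.37, 0.34],
    [0.58, 0.61, 0.57],
    [0.83, 0.86, 0.81],
])

for day, row in zip([1, 3, 5, 7], od_dual_layer):
    print(f"day {day}: OD490 = {row.mean():.3f} ± {row.std(ddof=1):.3f}")
```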
For evaluating the repair effect of the cartilage tissue, histological sections were scored according to a modified histological grading score [40]. The total scores range from 0 to 16, with 16 points referring to normal tissue. The samples were blindly scored by five independent observers according to the assessment scale. Statistical analysis Results are expressed as means and standard deviations. Statistical analysis was performed using one-way analysis of variance (ANOVA). The level of significance was defined as p < .05, and p > .05 was considered statistically non-significant. Fabrication and characterization of the surface layer scaffold Scaffolds composed of different COL/CS/HAS ratios were evaluated by physical characterization and in vitro cell experiments. Previous research has shown that the optimal ratio of the COL/CS/HAS porous scaffold is 1:1:0.1. The COL/CS/0.1HAS scaffold has suitable porosity (50%), swelling (650%), loss rate in hot water (12%) and mechanical properties (0.32 MPa). Results of in vitro fluorescence staining and cell proliferation suggested that COL/CS/0.1HAS/MPs had good biocompatibility and the capability to promote bone marrow stromal cell proliferation. Fabrication and characterization of the transitional layer scaffold All scaffolds had high porosity, ranging from 65% to 92%, and the porosity of COL/CS/0.5SF reached 92%, indicating a more optimal interconnected porous structure. With increasing SF content, the water adsorption and swelling ratio decreased gradually. The COL/CS/0.5SF scaffold exhibited the highest compressive strength (29.24 ± 0.10 kPa) among the four scaffolds, and the differences were significant. The COL/CS/0.5SF/TPHNs scaffold showed the optimal properties based on the evaluation of the physicochemical characterization, mechanical properties and biocompatibility. Fabrication and characterization of the dual-layer scaffold SEM of the dual-layer scaffold The SEM morphology of the biomimetic oriented cartilage scaffold is shown in Figure 2. The surface layer and the transitional layer were cemented by 1% SF, as shown. This microstructure was completely consistent with the intended design mimicking the layered structure of natural cartilage. The surface layer is an interconnected porous structure, and the transitional layer is a directional microtubule array structure. It can be seen that the pore size of the surface layer was significantly larger than that of the transitional layer. This favours the migration of autologous BMSCs from the bone marrow cavity along the microtubule walls to the surface layer and the induction of their differentiation into chondrocytes. Mechanical testing It is important for the scaffold to withstand stress during culture in vitro and as an implant in vivo. Mechanical properties also influence specific cell functions within the engineered tissues. During the test, the samples were all compressed to 70% strain. As shown in Figure 3(A), there were three stages in the stress-strain curves of the dry and wet scaffolds: a linear elastic stage (strain < 5%), a steady collapse plateau stage (15% < strain < 45%), and a sharply increasing densification stage (strain > 50%), which matches the typical stress-strain curves of cellular solids under compression. Figure 3(B) shows the compressive strength and elastic modulus of the dry and wet biomimetic cartilage scaffolds.
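As an illustration of how these two quantities can be read off a recorded stress-strain curve, the following is a minimal sketch on synthetic data, assuming the compressive strength is taken as the stress at the final 70% strain and the elastic modulus as the slope fitted over the initial linear-elastic region (strain < 5%); it is not the authors' analysis code.

```python
# Synthetic stress-strain data for illustration; not the measured curves.
import numpy as np

strain = np.linspace(0.0, 0.70, 500)                                  # dimensionless
stress = 0.03 * strain + 0.05 * np.maximum(strain - 0.45, 0.0) ** 2   # MPa, toy curve

compressive_strength = stress[-1]                 # stress at the final (70%) strain

elastic = strain < 0.05                           # initial linear-elastic region
slope, _ = np.polyfit(strain[elastic], stress[elastic], 1)            # modulus in MPa

print(f"compressive strength: {compressive_strength:.3f} MPa")
print(f"compressive elastic modulus: {slope * 1000:.1f} kPa")
```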
Compared with the dry biomimetic cartilage scaffolds (0.033 MPa), the wet biomimetic cartilage scaffolds (0.051 MPa) had a higher compressive strength, with an extremely significant difference (p < .001). The compressive elastic modulus of the wet biomimetic cartilage scaffolds is only 1.2 kPa, which still differs somewhat from normal cartilage (0.1 MPa). Cell proliferation and viability BMSCs were used to test the biocompatibility of the biomimetic cartilage scaffolds. Figure 4 shows the proliferation of BMSCs cultured in a mixture of culture medium and half leach liquor of the surface layer scaffolds, transitional layer scaffolds and biomimetic cartilage scaffolds. Among the three kinds of scaffolds, there were significant differences (p < .05) on the first day, which may be due to relatively few cells adhering to the transitional layer. The cells grew faster on the transitional layer between days 1 and 3. Cell numbers on the scaffolds increased considerably by day 5, because cells need an environmental adaptation period within the first 3 days. Compared with the surface layer and transitional layer scaffolds, cells exhibited the best proliferation rate on the biomimetic cartilage scaffolds on day 7. The proliferation results suggested that the biomimetic cartilage scaffolds could enhance cell proliferation. BMSCs were also used to test the biocompatibility of the scaffolds by AO-EB double staining after being cultured in a mixture of culture medium and half leach liquor of the surface layer scaffolds, transitional layer scaffolds and biomimetic cartilage scaffolds for 7 days (Figure 5). The AO-EB double staining showed that nearly all BMSCs were bright green and displayed a good fusiform cell morphology, and the integrity of the cell nucleus was maintained. The number of cells on the biomimetic cartilage scaffolds was significantly higher than in the other two groups after 3 days. Macroscopic observation A cartilage defect model in rabbits was used to evaluate the repair capability of the surface layer scaffolds, transitional layer scaffolds and biomimetic cartilage scaffolds (Figure 6). At postoperative 4 weeks, the indentation in the articular cartilage defect was still present, but new tissue began to appear in the transitional layer and biomimetic cartilage scaffold groups. A small amount of regenerated cartilage appeared after 8 weeks, but there were more osteophytes at the joint edge. The tissue transparency and surface smoothness in the biomimetic cartilage scaffold group were superior to the other groups after 12 weeks. At postoperative 16 weeks, on gross observation, the articular surface of the biomimetic cartilage scaffold group was smooth. Obviously, compared with the other two groups, the two-factor bionic oriented cartilage scaffold group showed the best regeneration effect. Histological staining analyses All of the New Zealand white rabbits survived without any obvious immunological or infectious complications during the experiment. The H&E staining of the regenerated cartilage in the four groups is shown in Figure 7. H&E staining showed well-defined constructs of chondrocytes and cell aggregation. Compared with the 4- and 8-week in vivo results, a significant increase in cell number and higher staining intensity were observed in the biomimetic cartilage scaffold group at 12 weeks. The biomimetic cartilage scaffold group formed mature regenerated cartilage similar to the normal tissue, with uniformly aligned chondrocytes, after 16 weeks. The cartilage defects in the control group were still evident.
A small amount of regenerated cartilage appeared in the surface layer scaffold group and the transitional layer scaffold group, but the distribution of cells was irregular, and a significant difference between the biomimetic cartilage scaffold group and normal tissue still existed. There was no doubt that the biomimetic cartilage scaffold group showed the most satisfactory effect compared with all other groups. The blue dye in alcian blue staining detects all cartilaginous tissues (Figure 8). Some proteoglycan appeared before 8 weeks, but the distribution of staining was irregular. Compared with the other groups, the proteoglycans in the biomimetic cartilage scaffold group were clearly and neatly distributed at 16 weeks. This result indicated that the biomimetic cartilage scaffold group exhibited the greatest amount of hyaline-like cartilage and had a stimulatory effect on cartilage defect repair. In the toluidine blue staining, cartilage and osteoblasts appear purple and blue. As shown in Figure 9, the toluidine blue staining intensity increases with the glycosaminoglycan content of the cartilage extracellular matrix. From 4 to 16 weeks, the staining of every group deepened significantly. Toluidine blue staining was deepest, and the surface was relatively smooth, in the bionic oriented cartilage scaffold group. After implantation for 16 weeks, in the biomimetic cartilage scaffold group, a layer of cartilage-like tissue was formed at the surface of the defects, which was well connected to the subchondral bone. The red safranin O dye, which colours nuclei red, indicates the proportion of proteoglycan content in the cartilage (Figure 10). An obvious defect appeared in the control group, and the safranin O staining was pale in colour. In the transitional layer group, the staining showed more chondrocytes at the surface, but there were longitudinal cracks and the surface was rough after implantation for 16 weeks. After implantation for 16 weeks, all groups showed some degree of repair. In comparison with the other groups, the biomimetic cartilage scaffold group had a stimulatory effect on cartilage defect repair. Histologic evaluation A quantitative scoring of the histological staining was further performed (Figure 11). For the evaluation of the overall tissue, the biomimetic cartilage scaffold group showed the highest scores at 4 weeks post-implantation (p < .05). Among all the component scores, the histological evaluation results of the biomimetic cartilage scaffold group were superior to the other groups, with significant differences (p < .05). Therefore, these results suggested that the biomimetic cartilage scaffold was able to support stable repair of the cartilage defect. Discussion Biomimetic cartilage scaffolds have been extensively used for the treatment of cartilage damage. Previous studies indicated that natural or synthetic scaffolds [9][10][11][12] could support the development of cartilage tissue. We prepared a dual-layer biomimetic cartilage scaffold by mimicking the structural design, chemical cues and mechanical characteristics of mature articular cartilage. To further enhance the chondroinductivity of the scaffold, KGN and TGF-β1 were used to functionalize it.
The COL/CS/0.1HAS/MPs scaffold was selected as the surface layer of the biomimetic cartilage scaffold, and the COL/CS/0.5SF/TPHNs scaffold was selected as the transitional layer. The results of the in vitro biocompatibility and AO/EB staining revealed that the biomimetic cartilage scaffold could significantly enhance cell proliferation, which indicates that this function is related to the synergistic effect of KGN and TGF-β1. Previous studies indicated that growth factors promote superior chondrogenesis [6,35]. The results of the in vivo animal experiments suggested that the defects in the biomimetic cartilage scaffold group had been completely filled, the boundary between new cartilage and the surrounding tissue was difficult to identify, and the morphology of cells in the repair tissue was almost in accordance with normal cartilage after 16 weeks. All these results indicated that the biomimetic cartilage scaffold guided the morphology, orientation, proliferation and differentiation of BMSCs. These results are related to the physical and chemical properties of the materials and the different structures and orientations of the composite. In this study, the seed cells that repaired the articular cartilage defects in the KOA model may be derived from endogenous BMSCs. First, the cells move from the broken subchondral bone into the implanted transitional layer material. Then, they reach the surface layer because TGF-β1 promotes the adhesion and proliferation of endogenous BMSCs. Finally, the effect of KGN on the proliferation and differentiation of BMSCs can achieve effective repair with new cartilage tissue in the defect area. Ultimately, the proposed treatment can be applied in cartilage repair engineering. Conclusions We prepared a dual-layer biomimetic cartilage scaffold to fabricate an excellent biomaterial for cartilage repair and ultimately treat KOA. Compared with the surface layer scaffold and the transitional layer scaffold, the dual-layer biomimetic cartilage scaffold could optimally promote cell adherence and proliferation. We further studied the chondroinductive and cartilage regeneration ability of the biomimetic cartilage scaffold in vivo. We concluded that the biomimetic cartilage scaffold could achieve effective repair with new cartilage tissue in the defect area. Thus, taking all these biological assays into account, the biomimetic cartilage scaffold could become a promising candidate for tissue engineering. Disclosure statement No potential conflict of interest was reported by the authors.
5,493.6
2019-05-07T00:00:00.000
[ "Medicine", "Engineering", "Materials Science" ]
A Dual-Branch Speech Enhancement Model with Harmonic Repair: Recent speech enhancement studies have mostly focused on completely separating noise from human voices. Due to the lack of specific structures for harmonic fitting in previous studies and the limitations of the traditional convolutional receptive field, there is an inevitable decline in the auditory quality of the enhanced speech, leading to a decrease in the performance of subsequent tasks such as speech recognition and speaker identification. To address these problems, this paper proposes a Harmonic Repair Large Frame enhancement model, called HRLF-Net, that uses a harmonic repair network for denoising, followed by a real-imaginary dual-branch structure for restoration. This approach fully utilizes the harmonic overtones to match the original harmonic distribution of speech. In the subsequent branch process, it restores the speech to specifically optimize its auditory quality to the human ear. Experiments show that under HRLF-Net, the intelligibility and quality of speech are significantly improved, and harmonic information is effectively restored. Introduction In both real-world production and living scenarios, as well as modern communication devices, interference with audio signals is inevitable. Part of the interference originates directly from the real-world environments where voice information is collected, and part arises from signal degradation during compression, transmission, and sampling in electronic devices. This phenomenon is referred to as voice degradation. Speech enhancement technologies aim to remove background noise from audio as much as possible while retaining the original speech information. Traditional speech enhancement methods generally work based on statistical signal principles, such as spectral subtraction [1], minimum mean square error estimation [2], filtering methods including the Wiener filter [3] and Kalman filter [4], and subspace enhancement methods that use cross-spectral pairs for frequency filtering of subspace signals [5]. However, these traditional methods often struggle to reduce noise effectively, especially in the presence of multiple noise sources or when the noise frequency range is concentrated. To devise a more versatile filtering method, S. P. Talebi proposes an approach based on fractional calculus [6], aiming to handle α-stable statistics more effectively, which provides an alternative solution to the requirements of modern filtering applications.
Currently, the mainstream speech enhancement methods based on deep learning follow two technical approaches. One is time-domain-based, utilizing neural networks to directly infer the spectrum of pure speech from noisy speech; this may produce better harmonic results but requires more computational resources and may be less effective in suppressing non-stationary noise compared to time-frequency domain methods [7]. Time-domain methods for waveform processing can significantly improve the Signal-to-Distortion Ratio (SDR) [8], but they may lead to a decrease in auditory perception. The primary reason for this issue is that the system, working in only one transform domain, struggles to filter out redundant information in the background noise. The other approach is frequency-domain-based, typically using masking techniques. The basic idea is to combine speech and noise signals in a certain way so that the predicted mask can accurately separate speech and noise signals. The Complex Ideal Ratio Mask (CIRM), based on the Fourier transformation and the concept of a crude ideal ratio mask, considers not only amplitude information but also phase information, preserving the phase information of the original speech signal and avoiding signal distortion due to amplitude changes [9]. Previous research often underestimates the importance of phase information in speech repair, leading to unavoidable speech distortion in denoised speech, which significantly interferes with subsequent speech recognition and speaker recognition tasks and reduces their performance. Hu et al. demonstrate [10] that better utilization of phase information in speech signals can significantly improve the quality of enhanced speech, achieving better performance with less loss. Direct estimation of phase information in spectrograms is challenging, often resulting in large neural networks [11]. To allow the phase information of the speech to play a greater role in the denoising process, researchers have made a considerable amount of effort. Inspired by the Taylor series, Li [12] and others propose a decoupled speech enhancement framework, dividing the optimization problem of the complex spectrum into two parts: the optimization of the magnitude spectrum and the estimation of complex residues. To refine the phase distribution, they define the difference between the rough spectrum and the target spectrum to measure the phase gap. A dual-branch enhancement network is introduced in [13], where the complex spectrum refinement branch collaboratively estimates the amplitude and phase information of speech by taking in both the real and imaginary parts. In the work of [14], a dedicated path encoder-decoder is designed to restore phase information and generate the phase spectrum for predicting speech. Experimental results have shown that the neural network's receptive field significantly affects the efficiency of model parameter utilization. Therefore, expanding the model's receptive field to be more sensitive to contextual information can achieve better phase understanding. Additionally, processing amplitude and phase spectra as separate branches in neural networks can better utilize phase information in speech signals, offering better interpretability.
Therefore, in this paper, we propose the harmonic repair large frame enhancement model, HRLF-Net, which is a dual-branch speech enhancement model designed with specialized modules to predict the harmonic distribution of speech. In the real-part branch of the network, we utilize fast Fourier convolutional operators instead of traditional 2D convolutions for amplitude spectrum repair, which effectively expands the model's receptive field and significantly improves the performance of speech harmonics. An architecture with a dilated DenseNet and deconvolution blocks is deployed in the imaginary branch to fully utilize speech phase information while preserving the temporal characteristics of the speech signal, making the enhanced speech reflect the dynamic changes of the original speech more accurately. HRLF-Net is tested on two public datasets, VoiceBank + DEMAND [15] and DNS Challenge 2020 [16]. Experimental results show that it outperforms most existing models and achieves state-of-the-art results in terms of Perceptual Evaluation of Speech Quality (PESQ). Proposed Methods This article primarily addresses the issue of standard single-channel speech enhancement, aiming to construct a neural network whose target is to fit a CIRM that transforms waveforms with additive noise into pure speech waveforms. The following sections provide detailed descriptions of the key components and the overall composition of the model. Fast Fourier Convolution (FFC) The Fast Fourier Transform (FFT) converts time-domain signals into frequency-domain signals. Compared to the Short-Time Fourier Transform (STFT), which decomposes time-domain signals into spectral components of a series of window functions, the FFT first decomposes the signal into a sum of sine and cosine functions to represent the spectrum, thus efficiently computing the Discrete Fourier Transform and obtaining the spectral information of the signal. For traditional fully convolutional models, the growth of the effective receptive field is too slow, and the lack of an effective context-capturing structure often results in suboptimal enhancement effects, a problem that is more prominent in wideband, long-duration audio. In the amplitude spectrogram of speech, harmonic structures often form periodic patterns, a feature that is suitable for processing with FFC, i.e., repetitive microstructures. Considering the model's aim to expand the neural network's receptive field for speech context, using FFC is more appropriate for analyzing the entire speech spectrum. It is a widely used non-global operator in the field of Computer Vision (CV) and can replace traditional convolution layers in network architectures, playing a significant role in repairing damaged periodic backgrounds. In computer vision research, the Fourier transform generally applies a complex two-stage method. In this work, we set the working domain of the Fourier transform as the frequency component of the feature map. The basic structure of FFC is shown in Figure 1. Specifically, the basic structure of FFC is implemented as follows.
• Before the signal S enters the operator, it is divided along the feature channels of the feature map into the local block S_l and the global block S_g, where the local block S_l uses adjacent local blocks as learning objects and the remaining global block S_g is used to obtain speech context associations. We use the parameter α to control the division ratio of the channels.
• To cover the entire spectrum with the receptive field of the global block, the original feature space is transformed into a specific domain on the global Fourier unit, and after the spectral data is updated, it is restored to a spatial format. Meanwhile, additional segmentation and combination are performed in the local Fourier unit to make it more sensitive to spectral detail features. Finally, the output data of the two units are connected using a residual connection.
• The results of the global and local blocks are simply connected to form the output of the complete operator. At this point, the entire FFC module is fully differentiable and can replace all traditional convolutions.
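As a rough illustration of the global path in these steps, the following is a minimal PyTorch sketch of a global Fourier unit: a real FFT along the frequency axis, a convolution over the stacked real and imaginary parts, and an inverse FFT back to the spatial format. Shapes and channel counts are assumptions for illustration; this is not the authors' implementation.

```python
# Minimal sketch of a global Fourier unit; layer sizes are assumed for illustration.
import torch
import torch.nn as nn

class GlobalFourierUnit(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # works on 2*channels because the real and imaginary parts are stacked
        self.conv = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.BatchNorm2d(2 * channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, freq)
        n_freq = x.shape[-1]
        spec = torch.fft.rfft(x, dim=-1)                    # complex, (B, C, T, F//2 + 1)
        spec = torch.cat([spec.real, spec.imag], dim=1)     # (B, 2C, T, F//2 + 1)
        spec = self.conv(spec)                              # convolution in the frequency domain
        real, imag = torch.chunk(spec, 2, dim=1)
        return torch.fft.irfft(torch.complex(real, imag), n=n_freq, dim=-1)

# x = torch.randn(2, 12, 100, 161); y = GlobalFourierUnit(12)(x)  # y keeps the shape of x
```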
We apply a real one-dimensional fast Fourier transform on the frequency dimension of the input feature map at the global component level and then concatenate the real and imaginary parts of the spectrum along the channel dimension. Next, we apply convolutional blocks on the frequency domain and finally restore the spatial structure using the inverse FFT. The Harmonic Repair Module Although speech signals can be extensively damaged due to noise, the harmonic parts usually reside in higher energy regions and are not completely masked. Since deep learning models prioritize fitting high-energy and more robust (prominent) harmonic structures due to gradient descent and convergence [17], harmonic waveforms exhibit significant comb-like features, meaning that even if part of them are damaged by noise, the remaining parts contain information that can infer the original harmonic distribution. To model harmonic data in the spectrum, the model uses a harmonic-to-fundamental frequency transformation matrix Q [18], which calculates the corresponding harmonic distribution using the predicted fundamental frequency. The input X_P ∈ R^(T×F), after convolution energy normalization, produces a query-key matrix K. Matrix multiplication between K and Q and the application of the sigmoid function yield a confidence vector for the pitch of the fundamental frequency, indicating the likelihood of each candidate value corresponding to the pitch. The harmonic repair module uses high-resolution comb tooth spacing to infer the damaged harmonic distribution and fine-tunes the result using convolution. Figure 2 shows the structure of the harmonic repair module. Unlike the traditional attention mechanism, which calculates attention weights using query vectors and key-value pairs, the harmonic repair mechanism calculates and repairs harmonic information based on the spectrum and the harmonic-pitch converter, using a residual connection [19] to mitigate the vanishing gradient problem. The locality of convolution is the fundamental guarantee for harmonic modeling, so the module retains the spectral structure even after processing [20].
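One possible reading of this gating mechanism is sketched below in PyTorch. The harmonic-to-fundamental-frequency matrix Q, the projection layers and all dimensions are assumptions made for illustration; this block is a sketch, not the authors' implementation.

```python
# Hedged sketch of a harmonic-repair-style gating block; Q is random here purely
# for illustration (in the paper it encodes comb-like harmonic patterns per pitch candidate [18]).
import torch
import torch.nn as nn

class HarmonicRepairSketch(nn.Module):
    def __init__(self, n_freq: int, n_pitch_candidates: int):
        super().__init__()
        self.key_proj = nn.Conv1d(n_freq, n_freq, kernel_size=1)          # stands in for energy normalisation
        self.register_buffer("Q", torch.rand(n_pitch_candidates, n_freq)) # assumed pitch-to-harmonic matrix
        self.out_conv = nn.Conv1d(n_freq, n_freq, kernel_size=3, padding=1)

    def forward(self, x_p: torch.Tensor) -> torch.Tensor:
        # x_p: (batch, time, freq) spectral features
        k = self.key_proj(x_p.transpose(1, 2)).transpose(1, 2)            # query-key matrix K, (B, T, F)
        pitch_conf = torch.sigmoid(k @ self.Q.t())                        # pitch confidence, (B, T, P)
        h = torch.softmax(pitch_conf, dim=-1) @ self.Q                    # harmonic distribution H, (B, T, F)
        out = self.out_conv((x_p * h).transpose(1, 2)).transpose(1, 2)    # element-wise gating + convolution
        return out + x_p                                                  # residual connection

# x = torch.randn(2, 100, 161).abs(); y = HarmonicRepairSketch(161, 128)(x)  # same shape as x
```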
The harmonic distribution H can be represented as H = softmax(sigmoid(K·Q^T)). X_P and H are rectified using one-dimensional convolution, then H is applied to X_P by element-wise multiplication, and the result is convolved to produce the output X_out. Using the harmonic repair module can effectively repair and restore the voice-focused frequency bands in the audio and suppress the impact of harmonic-like noise in noisy speech on the enhancement results. The Harmonic Fading-Out Module Due to the absence of channel interaction in the harmonic repair module, there is redundancy in the restoration of mid-to-high frequency harmonics, which negatively impacts the fidelity of speech timbre restoration. The harmonic fading-out module extracts spectral information from multiple angles and filters out potentially over-restored comb-like waveforms. It connects the harmonic repair module to two stacked multi-head attention [21] modules, one unfolding along the channel dimension and the other along the frequency dimension. As illustrated in Figure 3, this module first reshapes the input data X into R^(C×F×L) and uses three linear layers to obtain the Q, K and V keys, where L and F respectively represent the number of time frames and frequency bins. After rectification via Scale, Q_c and K_c are multiplied and then combined with V_c, and finally the result is concatenated with X_c to form a residual connection. The output is then used as the input for the next layer, the frequency attention module, which operates with a similar structure and ultimately produces the output of this module, X ∈ R^(C×L×F).
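A rough sketch of the two stacked attention stages just described is given below, using single-head attention for simplicity (the paper sets the number of heads to 4); the reshaping scheme and dimensions are assumptions, not the authors' code.

```python
# Hedged sketch of channel-axis then frequency-axis self-attention with residual connections.
import torch
import torch.nn as nn

class DualAxisAttentionSketch(nn.Module):
    def __init__(self, channels: int, n_freq: int):
        super().__init__()
        # tokens = channels, embedding = frequency bins
        self.channel_attn = nn.MultiheadAttention(embed_dim=n_freq, num_heads=1, batch_first=True)
        # tokens = frequency bins, embedding = channels
        self.freq_attn = nn.MultiheadAttention(embed_dim=channels, num_heads=1, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, freq)
        b, c, t, f = x.shape

        # attention along the channel axis, applied per time frame
        xc = x.permute(0, 2, 1, 3).reshape(b * t, c, f)        # (B*T, C, F)
        attn_c, _ = self.channel_attn(xc, xc, xc)
        xc = (xc + attn_c).reshape(b, t, c, f)                 # residual connection

        # attention along the frequency axis, applied per time frame
        xf = xc.permute(0, 1, 3, 2).reshape(b * t, f, c)       # (B*T, F, C)
        attn_f, _ = self.freq_attn(xf, xf, xf)
        xf = xf + attn_f                                       # residual connection

        return xf.reshape(b, t, f, c).permute(0, 3, 1, 2)      # back to (B, C, T, F)

# x = torch.randn(2, 12, 50, 161); y = DualAxisAttentionSketch(12, 161)(x)  # same shape as x
```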
Time Series Modeling To avoid the issues caused by the 3S paradigm adopted in recent Continuous Speech Separation (CSS) systems [22][23][24], such as the increased computational burden due to multiple overlaps between windows and the dilemma of choosing a window length that trades off performance against stitching stability, the enhancement network employs Long Short-Term Memory (LSTM) layers with memory skipping for time series modeling, capturing the contextual information of speech [25]. This layer is an improvement on LSTM [26], and the traditional LSTM mapping function can be represented as Ŵ, ĉ, ĥ = LSTM(W, c, h), where W ∈ R^(T×N) is the input sequence, and c and h are the initialized cell state and hidden state, respectively. On the left side of the equation, Ŵ represents the output sequence, while ĉ and ĥ are the updated cell state and updated hidden state. Generally, it is believed that ĉ encodes the entire sequence to form long-term memory, while ĥ is used for short-term memory of the processed sequence.
In the time-domain speech task, T in the input sequence W ∈ R^(T×N) can usually take a very large value. W can be divided into several smaller segments w_l^1, w_l^2, …, w_l^S, where S is the total number of segments and the length of each segment is determined by the parameter K, with W_l^s = W[sK − K : sK, :] ∈ R^(K×N). The Skipping-Memory LSTM, which consists of L basic layers connected in series, is shown in Figure 4. In each layer structure, S seg-LSTMs are used to process the S small segments w_l^1, w_l^2, …, w_l^S, where l represents the l-th input. The mapping function of the seg-LSTM can then be expressed as Ŵ_l^s, ĉ_l^s, ĥ_l^s = Seg_LSTM(W_l^s, c_l^s, h_l^s), where LN denotes the layer normalization operation in the residual connection, and ĉ_1^s and ĥ_1^s are all initialized to 0. All the segments are finally collected in the Mem-layer for global modeling to conduct cross-segment processing. The memory-skipping LSTM constructed in this way can handle relatively long speech sequences. By purposefully discarding the overlapping areas of adjacent segments, it achieves an effective balance between performance and efficiency.
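The segment-wise processing described above can be illustrated with a short PyTorch sketch: the long input is cut into non-overlapping segments of length K, a shared seg-LSTM processes each segment, and a memory LSTM runs over the segments' final states for cross-segment modelling. The hidden sizes, the handling of the memory output and the single-layer structure are simplifying assumptions, not the SkiM implementation used in the paper.

```python
# Hedged single-layer sketch of SkiM-style segment processing.
import torch
import torch.nn as nn

class SegLSTMSketch(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 256, seg_len: int = 150):
        super().__init__()
        self.seg_len = seg_len
        self.seg_lstm = nn.LSTM(feat_dim, hidden, batch_first=True)   # shared across segments
        self.proj = nn.Linear(hidden, feat_dim)
        self.norm = nn.LayerNorm(feat_dim)
        self.mem_lstm = nn.LSTM(hidden, hidden, batch_first=True)     # cross-segment "Mem-layer"

    def forward(self, w: torch.Tensor):
        # w: (batch, T, feat_dim); T is assumed to be a multiple of seg_len for brevity
        b, t, n = w.shape
        s = t // self.seg_len
        segs = w.reshape(b * s, self.seg_len, n)                       # (B*S, K, N)

        out, (h, _) = self.seg_lstm(segs)                              # per-segment processing
        out = self.norm(self.proj(out)) + segs                         # residual + layer norm

        # the Mem-layer models dependencies across segments; in a full SkiM stack its output
        # would initialise the seg-LSTM states of the next layer
        mem_out, _ = self.mem_lstm(h[-1].reshape(b, s, -1))            # (B, S, hidden)

        return out.reshape(b, t, n), mem_out

# w = torch.randn(2, 600, 64); y, mem = SegLSTMSketch(64)(w)  # y: (2, 600, 64), mem: (2, 4, 256)
```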
The Harmonic Repair Large Frame Enhancement Model HRLF-Net Under normal circumstances, the noise in noisy speech is non-additive, and the noisy speech y ∈ R^(p×1) can be represented as y = s + n, where s is the pure speech signal desired in the task, n is the noise, and p is the total number of samples in the signal. The task of speech enhancement is to separate s from y while suppressing the noise as much as possible. The input to the proposed model is the complex spectrum after the STFT, represented as Y = S + N, where Y, S and N respectively represent the complex spectra of the noisy speech, pure speech and noise signal, with {Y, S, N} ∈ R^(2T×F). T and F represent the number of bins in the time and frequency dimensions of the real and imaginary parts, respectively. Figure 5 shows the overall architecture of our proposed HRLF-Net. Several harmonic repair modules are used to extract and refine features, and harmonic fading-out modules, represented as HFO in Figure 5, are applied after each processing step to prune the processed features to a certain extent; a harmonic fading-out module is used concurrently with each harmonic repair module for optimal filtering. The Real and Imag branches correspond to the real and imaginary parts of the CIRM mask structure and are processed by two branch networks. In the real branch, convolutional layers with causal 2D convolution, batch normalization [27] and PReLU [28] preliminarily filter the input signal and separate different channels, forming an autoencoder design [29], followed by cascaded FFC modules with residual connections. The resulting output is upsampled to obtain the real part M_r of the predicted CIRM spectrum. To avoid issues of non-structural and wrapping phase jumps [30], the Imag branch employs a phase decoder with a dilated DenseNet [31]. After the deconvolution block, parallel dual 2D convolution layers output pseudo-components, and a dual-parameter arctangent function activates these two components to obtain the imaginary part M_i of the predicted CIRM spectrum, with instance normalization layers connected to standardize the network's intermediate features. This structure is based on the definition of the complex ideal ratio mask.
The complex ideal ratio mask is defined from the noisy and clean complex spectra as M_r = (Y_r S_r + Y_i S_i)/(Y_r^2 + Y_i^2) and M_i = (Y_r S_i − Y_i S_r)/(Y_r^2 + Y_i^2), where Y_r and Y_i respectively represent the real and imaginary parts of the noisy complex spectrum, and S_r and S_i represent the real and imaginary parts of the pure speech complex spectrum. The mask built from the M_r and M_i estimated from the noisy speech can be represented as M = M_r + jM_i. Multiplying the noisy speech spectrum Y = Y_r + jY_i with the mask M results in the enhanced speech spectrum Ŝ = Y · M (complex multiplication), which is then transformed back into the final waveform using the Inverse Short-Time Fourier Transform (ISTFT), ŝ = ISTFT(Ŝ).

Loss Functions We use a multilayer loss function to help the network train and fit effectively [28]. The time loss L_T is calculated by computing the L1-norm difference between the clean speech waveform x and the model's enhanced output waveform x̂, L_T = E_{x,x̂}[‖x − x̂‖_1], with the time loss result being the average of the differences across all frames. The magnitude loss L_M is computed by first extracting the magnitude spectra X_m and X̂_m from the clean speech and the enhanced speech, respectively, and then calculating the mean squared error (MSE) between them, L_M = E_{X_m,X̂_m}[‖X_m − X̂_m‖_2^2]. The final loss value is the expected MSE across all frequency and time points. The composition also includes the complex spectrum loss L_C, which takes the ground-truth and predicted complex spectra as inputs and calculates the MSE between the two, L_C = E_{X,X̂}[‖X − X̂‖_2^2], evaluated over the real and imaginary parts.

Experiments In this section, the datasets and performance evaluation metrics used in our experiments are first introduced. Then, the effectiveness of the proposed modules is validated in subsequent experiments, demonstrating overall performance superiority across several metrics compared with a series of strong baselines.

Datasets and Experimental Setup DNS Challenge: To maintain consistency with the evaluation datasets of mainstream models, in our experiments we utilize the ICASSP 2020 Deep Noise Suppression Challenge (DNS Challenge 2020) dataset for training and evaluation. The dataset includes clean speech predominantly in English, from 2150 speakers selected from over ten thousand individuals, totaling over 500 h, as well as 60,000 noise segments from 150 different noise categories. During model training, clean speech and noise are randomly selected from their respective sets and dynamically combined to simulate noisy speech. The signal-to-noise ratio (SNR) is uniformly distributed between −5 dB and 20 dB. The training and validation sets are split in a 4:1 ratio and are completely isolated from the test set. VoiceBank + DEMAND: This public dataset includes paired noisy and clean speech clips. The clean speech audio segments are sourced from the VoiceBank corpus, which contains 28 different speakers, 11,518 audio segments, and over 800 speech segments for the test set. Clean speech segments are mixed with ten different types of noise at SNRs of 0 dB, 5 dB, 10 dB, and 15 dB. The test set includes DEMAND database materials at SNRs of 2.5 dB, 7.5 dB, 12.5 dB, and 17.5 dB. The model input uses a Hanning window with a length of 20 ms and 50% overlap, along with an STFT of 320 points, to produce 161-dimensional spectral features. The size and stride of the convolutional kernel are (2,3) and (1,1), respectively, and the number of heads in the harmonic integration of the harmonic repair is set to 4.
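As a concrete illustration of the loss composition described above, here is a short PyTorch sketch of ours using the stated STFT settings (320-point FFT, 20 ms Hann window, 50% overlap, i.e., a hop of 160 samples at 16 kHz, giving 161 frequency bins). The equal weights w_t = w_m = w_c = 1 are an assumption; the excerpt does not specify them.

```python
# Illustrative multi-part loss: time-domain L1, magnitude MSE, and complex-spectrum MSE.
import torch
import torch.nn.functional as F

N_FFT, HOP = 320, 160
WINDOW = torch.hann_window(N_FFT)


def stft(x: torch.Tensor) -> torch.Tensor:
    # returns a complex tensor of shape (batch, 161, frames)
    return torch.stft(x, N_FFT, HOP, window=WINDOW, return_complex=True)


def multilayer_loss(clean: torch.Tensor, enhanced: torch.Tensor,
                    w_t: float = 1.0, w_m: float = 1.0, w_c: float = 1.0) -> torch.Tensor:
    loss_time = F.l1_loss(enhanced, clean)                     # L_T: waveform L1
    c_spec, e_spec = stft(clean), stft(enhanced)
    loss_mag = F.mse_loss(e_spec.abs(), c_spec.abs())          # L_M: magnitude MSE
    loss_cplx = F.mse_loss(torch.view_as_real(e_spec),         # L_C: MSE over the real and
                           torch.view_as_real(c_spec))         #      imaginary parts
    return w_t * loss_time + w_m * loss_mag + w_c * loss_cplx


clean = torch.randn(2, 16000)                                  # 1 s of 16 kHz audio
enhanced = clean + 0.05 * torch.randn_like(clean)
print(multilayer_loss(clean, enhanced))
```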
The channel numbers of the harmonic repair module are {12, 24, 24, 48, 48, 24, 12, 12}, where the first six are for the main structure, and the last two belong to the compensation components.The SkiM layer includes four basic SkiM blocks, with the hidden dimension of the LSTM set to 256.The feature-length segment size S for Seg-LSTM is set to 150, where only layer normalization is performed in the feature dimension in the causal SkiM.In the FFC module, the channel ratio used in the global branch is α, which equals to 0.75.The imaginary part branch applies four convolutional layers with dilation sizes of 1, 2, 4, and 8 in the expanded DenseNet sequentially.The model undergoes 80,000 iterations of training, with a batch size of eight.The Adam optimizer is employed, and the learning rate is 0.0001. Evaluation Metrics PESQ: Perceptual Evaluation of Speech Quality is a standardized objective metric used to assess the quality of speech signals.Initially developed by the International Telecommunication Union (ITU), it was primarily designed for evaluating speech quality within telephony systems.PESQ aims to provide an objective quantification of speech quality by simulating aspects of the human auditory system's response to audio signals. STOI: Short-Time Objective Intelligibility is an objective metric designed to assess the intelligibility of speech signals.Similar to PESQ, STOI aims to provide an objective and quantitative method for measuring the understandability of speech signals.This metric primarily focuses on the intelligibility of speech signals in noisy environments and is applicable to speech communication, speech recognition, and other applications that involve conveying speech information in the presence of background noise.SI-SDR: Scale-Invariant Signal-to-Distortion Ratio is an objective metric used to measure the quality of separating audio sources.It is primarily employed to assess the performance of separation algorithms in audio source separation tasks.SI-SDR considers a balance between the scale-invariant proportion between the estimated and true source signals and the level of distortion, making it a scale-invariant measure.CSIG: Composite Speech Intelligibility Index is a metric used to assess the performance of speech processing systems, such as speech enhancement and speech coding.This metric is primarily employed to measure the intelligibility and quality of speech signals.The design of CSIG aims to provide a comprehensive evaluation that reflects the impact of speech processing systems on the original speech signal.It considers not only the clarity of the speech signal but also the combined effects of factors such as background noise and distortion. Parameter Selection Table 1 shows the experiments conducted on the DNS Challenge 2020 dataset for selecting a suitable parameter N, which is the number of paired Repair and HFO blocks.From Table 1, we see that as N starts to increase from 2, all metrics in the table show a significant improvement, with PESQ being the most obvious.When N exceeds 4, STOI does not show a noticeable increase, while other metrics begin to decline to different extents and contribute to an increase in the model parameters.Based on these results, in our following experiments, we set N to be 4.The optimal values of the objective evaluation metrics are highlighted in bold font. 
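To make the SI-SDR metric described above concrete, the following is a short NumPy implementation of its standard definition: the optimal scaling factor alpha projects the estimate onto the reference before the distortion ratio is computed, which makes the score invariant to the overall gain of the estimate. It is provided for illustration and is not tied to the toolkit the authors used.

```python
# Scale-Invariant Signal-to-Distortion Ratio (SI-SDR), standard definition, in dB.
import numpy as np


def si_sdr(reference: np.ndarray, estimate: np.ndarray, eps: float = 1e-8) -> float:
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    # optimal scaling of the reference onto the estimate
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    distortion = estimate - target
    return 10.0 * np.log10((np.sum(target ** 2) + eps) / (np.sum(distortion ** 2) + eps))


rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)
noisy = clean + 0.1 * rng.standard_normal(16000)
print(f"SI-SDR of the noisy input: {si_sdr(clean, noisy):.2f} dB")
```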
Ablation Studies This study, based on ablation experiments, analyzes the contribution of each module of the model network to the overall performance. It then evaluates the overall performance of the model based on PESQ, STOI, and SI-SDR. Finally, the comprehensive performance of the proposed method is verified. The ablation experiments are conducted on the DNS Challenge 2020 dataset and the VoiceBank + DEMAND dataset. The experimental results in Tables 2 and 3 show that using only the harmonic repair module and disabling the fading-out module leads to a significant decrease in speech quality. This is because the nonlinear operations of the harmonic repair module cause disturbances, especially in the recovery of harmonics and particularly in high-amplitude sections, amplifying these disturbances and ultimately affecting the enhancement performance. Pairing the fading-out module with harmonic repair significantly mitigates this issue. Replacing SkiM with a naive LSTM demonstrates that SkiM achieves essentially the same performance level as the naive LSTM while significantly reducing computational costs. Comparative experiments using ordinary 2D convolutional kernels in place of the real-part branch's FFC module are also conducted. Unlike traditional convolution, where the receptive field is limited by the size of the kernel, FFC performs convolution in the frequency domain and considers all frequency components of the input simultaneously, which allows long-term dependencies in the signal to be captured better. These experiments show that, thanks to this large-scale receptive field, the context of the speech is fully utilized in the recovery of the real spectrogram. To serve as a control, the effectiveness of FFC is studied using conventional 2D convolution substitution, where +/− indicates whether the submodule is masked in the experiments.

Comparison with Other Models We compare the proposed model with other models on two datasets using the experimental results provided in the original papers. On the DNS Challenge dataset, as seen in Table 4, our model improves PESQ without significantly increasing the number of model parameters and maintains an advanced level in the other objective metrics. On the VoiceBank + DEMAND dataset, as shown in Table 5, except for a slight inferiority to DEMUCS [30] in CSIG, our model similarly shows significant improvements in all metrics. Compared to other studies, the advantage of our proposed method lies in considering the crucial role of harmonic information under noise masking for speech restoration. When estimating the phase spectrum, we follow the complex spectrum calculation method separately for the real and imaginary parts. We achieve collaborative optimization of the phase spectrum and magnitude spectrum using multiple losses while also considering contextual information in the speech.
Conclusions In this paper, we proposed HRLF-Net, a dual-branch speech enhancement network. The harmonic repair module used in the model significantly restores the harmonic distribution of the speech. In the subsequent imaginary and real dual-branch structure, the FFC module plays a key role in expanding the receptive field of the model, while the dilated DenseNet effectively overcomes phase wrapping and significantly improves the comprehensibility of the speech. The effectiveness and necessity of each module in the model are validated through ablation experiments. The proposed network shows a significant improvement in the PESQ metric compared to other state-of-the-art models, and it maintains a high level of short-term speech intelligibility. Furthermore, during the experiments, we observed significant variations in the dynamic range of speech signals due to differences in speakers, contexts, and recording conditions. Similarly, background noise ranges from low-intensity environmental noise to high-intensity interference signals such as traffic or industrial noise. Therefore, to further enhance the speech enhancement system's ability to handle dynamic range, we will explore additional data-driven approaches in the future. Also, we plan to extend our work to real-time applications by achieving even lower latency levels.

Figure 1. The basic structure of FFC.
Figure 2. The structure of the harmonic repair module.
Figure 3. The structure of the harmonic fading-out module.
Figure 6 displays the spectrograms before and after enhancement via our proposed model. The spectrogram comes from a randomly selected ten-second audio in the DNS Challenge 2020 set. The red rectangular boxes in the image highlight the pure noise sections that are almost devoid of speech. In these sections, the proposed model effectively suppresses the background noise, as can be seen in the comparative spectrograms below. The pure noise sections, containing no human voice, are almost completely silent after noise reduction. The white and black dashed boxes in this figure indicate the effectiveness of harmonic restoration. In the white dashed box, the harmonic features are nearly invisible due to noise masking. After enhancement, the corresponding section in the spectrogram shows distinct comb-like harmonics, which indicates that the proposed model significantly restores the harmonics of speech during the enhancement process. This plays a key role in improving the clarity and distinguishability of the speech.

Table 1. Selection of the number of stacked blocks for Repair and HFO blocks.
Table 2. Ablation experiments targeting the main submodules on the DNS Challenge 2020 dataset.
Table 3. Ablation experiments targeting the main submodules on the VoiceBank + DEMAND dataset.
Table 4. Comparison with other models on the DNS Challenge 2020 dataset; * indicates that the corresponding result is not provided in the original paper and the value in this table is obtained through our experiments.
Table 5. Comparison with other models on the VoiceBank + DEMAND dataset; * indicates that the corresponding result is not provided in the original paper and the value in this table is obtained through our experiments.
8,978.2
2024-02-18T00:00:00.000
[ "Computer Science" ]
Fabrication of ultrahigh-density nanowires by electrochemical nanolithography An approach has been developed to produce silver nanoparticles (AgNPs) rapidly on semiconductor wafers using electrochemical deposition. The closely packed AgNPs have a density of up to 1.4 × 1011 cm-2 with good size uniformity. AgNPs retain their shape and position on the substrate when used as nanomasks for producing ultrahigh-density vertical nanowire arrays with controllable size, making it a one-step nanolithography technique. We demonstrate this method on Si/SiGe multilayer superlattices using electrochemical nanopatterning and plasma etching to obtain high-density Si/SiGe multilayer superlattice nanowires. Introduction Low-dimensional systems are of high interest because their unique properties can improve device performance in a range of applications, including optics [1,2], mechanics [3], microelectronics [4], and magnetics [5]. These systems have enhanced surface and quantum confinement effects caused by the large surface-to-volume ratio and small size, making them dramatically different from their bulk counterparts. Superlattice nanowires have the potential to improve the performance of thermoelectronics [6][7][8][9], small sizes have lower thermal conductivity [8,9], and they can be made at a high density, thus providing improved performance. Generally, there are two major approaches in the fabrication of nanostructures: bottom-up [10] and top-down [11]. Among the various bottom-up methods, vaporliquid-solid (VLS) growth is one of the most popular and is used to grow nanostructures such as nanowires [12][13][14]. VLS growth uses a catalytic liquid-alloy phase that can rapidly adsorb a vapor to supersaturation levels, in which crystal growth can subsequently occur from nucleated seeds at the liquid-solid interface. It is a relatively simple method and yields a large quantity of nanowires from a single growth. However, the requirement of metal particle catalysts risks contaminating the nanowires [15], and it is not easy to control the density and nanowire size, shape, and crystal orientation simultaneously [16]. Additionally, twin boundaries normally form in the VLS growth, which may affect the subsequent nanowire performance [17]. Typically, top-down approaches involve lithography, which defines the lateral size and shape of the final structure using an electron/photon-sensitive resist as mask material. Examples are electron beam lithography [18] and X-ray nanolithography [19]. For example, Zhong et al. have reported ordered SiGe/Si superlattice pillars combining holographic lithography, molecular beam epitaxy (MBE) growth, and wet chemical etching [20]. Although e-beam and X-ray lithographies create uniformly distributed and ordered templates for further top-down processing, they are expensive and time consuming. They also require several processing steps, involving photoresist deposition/removal and chemical or ion beam etching. Other approaches utilize self-assembling [21] structures such as block copolymers [22] or anodic aluminum oxide as masks [13,23]. The outputs of self-assembling techniques are uniform in size and ordered over a large scale; however, they usually require additional deposition, baking, etching, and stripping processes. Instead of a patterned photoresist, it is also possible to use nanoparticles (NPs) as a nanolithography mask. 
NPs can be prepared by electrochemical deposition (ECD), an easy, fast, economical, and straightforward way to deposit materials directly on top of semiconductors [24] or metals [25][26][27]. To the best of our knowledge, ECD of NPs has not been reported in top-down semiconductor nanostructure fabrication. We deposit silver nanoparticles (AgNPs) with sizes of tens of nanometers using pulsed-current-driven ECD. By adjusting the deposition conditions, we achieve high-density AgNPs with uniform size and spacing. The resulting one-step electrochemically deposited AgNPs are very robust and can survive further processing. Therefore, they can be used as a hard mask for plasma etching or as a metal-assisted etching mask [23]. By using this mask in combination with chemical vapor deposition (CVD) growth and plasma etching, we are able to fabricate ultrahigh-density (6.2 × 10^10 cm^-2) Si/SiGe superlattice nanowire arrays over a large area, with individual wires < 30 nm in diameter and approximately 200 nm in length.

Experimental details The ECD system used in our experiment is schematically shown in Figure 1a. It consists of a function generator (Agilent 33220A, Agilent Technologies, Inc., Santa Clara, CA, USA), a voltage amplifier (Agilent 33502A, Agilent Technologies, Inc., Santa Clara, CA, USA), a glass beaker, Ag foil (99.9%, Aldrich, Sigma-Aldrich, St. Louis, MO, USA) as the anode, and the semiconductor substrate as the cathode. The Si/SiGe superlattice wafer is prepared using low-pressure CVD. We grow a ten-period Si/Si0.82Ge0.18 superlattice structure on Si wafers (Si(001), p-type, nominal doping density of 10^15 cm^-3), and cap the superlattice with Si (Figure 1b). The layer thickness is 10.8 nm for Si and 7.0 nm for the SiGe alloy, as confirmed by X-ray diffraction (XRD, PANalytical X'Pert MRD, PANalytical, Inc., Westborough, MA, USA). Both layers are grown at 580°C, with silane and germane as precursors. The root mean square surface roughness measured by atomic force microscopy (AFM, Digital Instrument Nanoscope IV, Veeco Instruments, Santa Barbara, CA, USA) after CVD growth of all the layers is 0.7 nm. Before performing ECD, we dip the as-grown superlattice wafer in hydrofluoric acid to remove the native oxide on the top Si layer. After rinsing in deionized water (DI) for 5 min, we quickly immerse it into the AgNO3 solution (1 × 10^-4 M). We use pulsed current as the deposition driving force to deposit nanoparticles because this approach is very controllable when depositing a small amount of material. The pulsed signal consists of a long period (T) with a short pulse length (τ). Various immersion times (t) and pulse lengths (τ = 1 ms to 0.5 s) were tried in the experiment in order to obtain AgNPs with uniform small size and high density. After AgNP deposition, plasma etching is performed to produce superlattice nanowires. The substrate, with the AgNPs acting as a hard mask, is etched by a high-density helicon plasma tool equipped with a diode laser interferometer for in situ etch depth measurement (Figure 1d) [28]. A gas mixture of SF6/C2H2F4 is used. Source power and bias voltage are finely tuned to obtain a vertical etch profile. Using this system, we are able to etch out nanowires up to several microns in length [29]. In the experiment, we fix the pulse period T to be 1 s. The size and density of the AgNPs vary with the immersion time and pulse length. The pulse length τ is first fixed to 0.5 s in order to study the result for different immersion times t.
Results and discussion We vary the immersion time from 10 s to 50 s in steps of 10 s. For t = 10 s, the average NP size is around 20 nm in diameter, with 14-17 nm NPs the most prevalent (Figure 2a). The size distribution remains qualitatively the same with increasing immersion time up to 40 s, but the average and most prevalent sizes increase. For t = 20 s, the average size is 25.4 nm, and the most prevalent size range is 17-20 nm (Figure 2b). Finally, nanoparticles with diameters of 29 ± 4 nm are achieved at an immersion time of 40 s, as shown in Figure 2d. When the immersion time reaches 50 s, however, the sizes become random and range from 25 to 37 nm (Figure 2e). For a much shorter pulse length, τ = 1 ms, the general trend of the size distribution is the same, but the particles are much smaller and the uniform size distribution lasts to longer immersion times. For comparison, the results for the 50-s immersion time are shown in Figure 2f. The applied electric field is the driving force of the Ag+ reduction reaction. Each positive pulse applied on the electrode drives Ag+ ions towards the cathode, here the Si surface. The pulse length determines the number of adatoms arriving on the surface. Because the applied voltage (20 V) is much higher than the overpotential (300 mV), the effects of the space charge layer and the Helmholtz layer can be ignored when considering the potential profile across the substrate/solution interface [30]. Because of the weak chemical interaction between the adatoms and the substrate, nucleation and growth of Ag on Si occurs via a Volmer-Weber mode [31]. For a pulse with a long pulse length (such as 0.5 s), the nucleation is an instantaneous process [31], and the growth is diffusion-limited [32]. In the first pulse, the nucleation density reaches its maximum during the first few milliseconds and passes the peak value thereafter; thus nucleus coarsening follows the nucleation in order to reduce the total free energy of the system. At the moment nucleation is completed, the ion-depleted layers (the solution layer near the surface where there are no ions; it is thinner than the diffusion layer) surrounding each nucleus are well separated from each other (Figure 3a). As a result, the growth of a nucleus is not influenced by the growth of neighboring particles. Therefore, in this "weak interparticle interaction" limit, particles grow at equal rates. However, the reduction of ions to atoms causes the global ion concentration in the diffusion layer to decrease. The 0.5-s pulse-off time allows the depleted layer to be repopulated by ions diffusing from far-off electrode regions. This reasoning explains why the particle density decreases with increasing immersion time (t ≤ 40 s) while still remaining quasi-uniform. The particle density decreases from 6.28 × 10^10 cm^-2 in Figure 2a to 5 × 10^10 cm^-2 in Figure 2b and stabilizes around 3.5 × 10^10 cm^-2 in Figure 2c,d. The density reduction rate slows at longer immersion times because the decrease in the solution ion concentration increases the diffusion layer thickness. A low ion concentration or a long immersion time can cause a transition from weak interparticle interaction to strong interaction. At long immersion times (t > 40 s in our experiment), depletion layers of adjacent particles merge to create an approximately planar diffusion layer across the entire surface (Figure 3b); this strong interparticle interaction makes the flux of ions per unit area on the surface more uniform.
Because the nucleation density is locally variable, densely nucleated regions are therefore expected to have a slower growth rate than regions of the same size but encompassing a smaller number of nanoparticles. Thus, when the immersion time reaches 50 s the size distribution becomes less uniform (Figure 2e). At the same time, Ostwald ripening decreases particle size uniformity as smaller islands are eliminated by the larger ones [33]. By reducing the pulse length, we achieve higher particle density and smaller particle size while still maintaining good size uniformity. In Figure 2f, the AgNPs have a density of 1.4 × 10 11 cm -2 , which is almost twice the density for τ = 0.5 s (Figure 2e), while the relative standard deviation of size is only 42%. The AgNPs are crown-shaped rather than hemispherical. If we assume that the deposited atoms are (almost) all forming nanoparticles and the distribution of sizes do not vary from what is observed in the SEM image, the total number N of deposited nanoparticles at equilibrium is given by [30]. where r is the radius of the NP, N A is Avogadro's constant, M 0 is the molar mass of Ag, g is the bulk Ag density, I is the measured current, and q 0 is the electron charge. From Equation 1, the diameter of AgNPs can be expressed as a function of immersion time and pulse length: If we assume the AgNPs are hemispherical, the percentage of the Si surface covered by AgNPs is: where n/s 0 is the particle density on the SEM image area. Inserting the experimental pulse length τ = 0.5 s into Equation 2 and Equation 3, we can plot the size of the particles and the coverage of the substrate as functions of the immersion time, as shown in Figure 4a,b, respectively. The most common particle size agrees with theory very well (Figure 4a). However, the average particle size is larger than the most common particle size, indicating variation in the shape of the AgNPs in the ECD. When the immersion time is short, the particles are crowns instead of hemispherical. As the immersion time increases, the height of the particle increases, and this shape change results in the experimental coverage getting closer to the theory curve, in which we assume the particles to be perfect hemispheres, as shown in Figure 4b. The NPs will finally turn into a continuous film if a long enough immersion time is used [30]. Using the AgNPs as a mask, vertically aligned superlattice nanowires are etched, as shown in Figure 5, which shows both scanning electron microscopy and transmission electron microscopy images. If a single etching time is too long, because sidewall etching occurs simultaneously, the top of the nanowires (closest to the plasma source) will be etched away and the nanowire will form a tapered structure (Figure 5c). This problem can be overcome by depositing a fluorocarbon film (using C 4 F 8 as precursor) on the sidewall during etching [34]. It protects the sidewall from being further etched, and it can be removed with an O 2 plasma or HF afterward. The details of deep reactive-ion etching (RIE) can be found in Ref. [35]. Using high-resolution transmission electron microscopy (HRTEM, Philips, CM200UT, Philips Electron Optics BV, Eindhoven, The Netherlands), we can explore the layered structure of the nanowires. Figure 5c, a relatively low-magnification image, shows the periodic variation in brightness representative of the layers. The darker regions are the SiGe alloy because Ge scatters electrons more strongly. We can clearly see the interfaces between Si and SiGe layers. 
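The three expressions referred to in this passage (Equations 1–3) are missing from the excerpt. The following is a hedged reconstruction from the stated symbol definitions, assuming that (i) the charge passed during deposition is Q = I·t·τ/T for pulses of length τ in a period T applied over an immersion time t, (ii) every electron reduces one Ag+ ion, and (iii) the N particles are hemispheres of radius r; the actual expressions in [30] may differ in these details.

```latex
% Hedged reconstruction of Equations 1-3 (not verbatim from the paper):
\begin{align}
  N \,\frac{2}{3}\pi r^{3}\,\frac{N_{A}\,g}{M_{0}} &= \frac{I\,t\,\tau}{q_{0}\,T}
    && \text{(1) charge--volume balance for $N$ hemispherical particles}\\
  d = 2r &= 2\left(\frac{3\,M_{0}\,I\,t\,\tau}{2\pi\,g\,N_{A}\,q_{0}\,T\,N}\right)^{1/3}
    && \text{(2) particle diameter vs.\ immersion time $t$ and pulse length $\tau$}\\
  \text{coverage} &= \frac{n}{s_{0}}\,\pi r^{2}\times 100\%
    && \text{(3) projected area of hemispherical AgNPs per unit SEM image area}
\end{align}
```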
Figure 5d demonstrates that the nanowires are single crystals, as we expect from the MBE growth. In the VLS growth of nanowires, normally twin boundaries are observed [36]. With our method, this problem is eliminated because the starting material is a CVD-grown single-crystalline 2D superlattice. The SiGe alloy etches faster than pure Si in a SF 6 / C 2 H 2 F 4 plasma, giving the edge of the wire a scalloped appearance, effectively introducing surface roughness to the sidewall. The etching process does not affect the crystallographic properties of the superlattice. The surface roughness may result in enhanced phonon scattering [37]. Si and Ge nanowires have been considered as potentially good thermoelectric materials, because of the reduced thermal conductivity at small dimensions. Superlattice nanowires have even greater potential because of the band offset between Si and SiGe [37]. Thus, electric conductivity is possibly improved through the superlattice structure [38,39]. The combination of excellent superlattice with edge roughness of our etched nanowires may therefore bring higher thermoelectric efficiency for group IV materials than has been possible so far. Conclusion In this paper, we introduce a one-step nanolithography method to fabricate quantum wires with diameters down to 15 nm using electrochemically deposited AgNPs. The AgNP density obtained is as high as 1.4 × 10 11 cm -2 with coverage up to 37% over a large area. By adjusting the pulse length and immersion time, the size and density of AgNPs can be well controlled. We demonstrate that these high-density AgNPs can be used as a hard etching mask to fabricate vertically aligned Si/SiGe superlattice nanowires. Because the method does not need lithography to define the pattern, it is much less expensive and can make very small patterns that may have considerable use even if the pattern is not totally uniform. The size and coverage of etched nanostructures only depend on the AgNPs. The method can be used with substrates of any material as long as it conducts sufficiently to form a cathode for the electrochemical process, and a proper etch chemistry to which Ag is resistant is available. The AgNP mask can be used in both metal-assisted etching to etch Si and SiGe and RIE to etch most other semiconductor materials. The method has the potential to make nanowires of different materials, as well as different orientations. It should be very useful in making devices requiring nanowires, such as nanothermoelectronic devices, that require a small size, narrow dispersion, and high density.
3,735.2
2011-07-11T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
A Survey on Time-Sensitive Networking Standards and Applications for Intelligent Driving : Stimulated by the increase in user demands and the development of intelligent driving, the automotive industry is pursuing high-bandwidth techniques, low-cost network deployment and deterministic data transmission. Time-sensitive networking (TSN) based on Ethernet provides a possible solution to these targets, which is arousing extensive attention from both academia and industry. We review TSN-related academic research papers published by major academic publishers and analyze research trends in TSN. This paper provides an up-to-date comprehensive survey of TSN-related standards, from the perspective of the physical layer, data link layer, network layer and protocol test. Then we classify intelligent driving products with TSN characteristics. With the consideration of more of the latest specified TSN protocols, we further analyze the minimum complete set of specifications and give the corresponding demo setup for the realization of TSN on automobiles. Open issues to be solved and trends of TSN are identified and analyzed, followed by possible solutions. Therefore, this paper can serve as an investigative basis and reference for TSN, especially for TSN in automotive applications.

Introduction The structure of the traditional automobile network is relatively simple, where a controller connects with the devices in its domain and different controllers do not interfere with each other. With the increase in user demands on various functionalities, the number of electronic control units (ECUs) in automobiles has gradually increased. Information exchange between ECUs has become more complicated and requires high bandwidth. In addition, with the popularization of advanced driver-assistance systems (ADAS) for intelligent driving, more and more sensors, cameras and entertainment systems are being integrated into automobiles, which place higher performance requirements on the certainty, latency and jitter of automobile networks. Ethernet has a simple connection mechanism and protocol operation, and it can provide 10 Gb/s, or even 100 Gb/s, bandwidth for data transmission. Compared with traditional solutions, such as the controller area network (CAN) [1], local interconnect network (LIN) [2], media-oriented system transport (MOST) [3] and FlexRay [4], Ethernet is a promising solution for in-vehicle networks and is more likely to be dominant. However, the definition of Ethernet fundamentally lacks attributes to guarantee deterministic, low-latency and low-jitter data transmission for time-sensitive and critical applications. Thereby, new networking techniques need to be studied to further develop Ethernet for automobile networks. In this context, time-sensitive networking (TSN) is proposed to enable real-time and deterministic transmission for critical traffic based on Ethernet hardware.
The TSN family of standards is a tool set that offers reliability, determinism and time synchronization for safety-critical automotive communications over Ethernet links. TSN is a set of specifications standardized by the Institute of Electrical and Electronics Engineers (IEEE) 802.1 working group (WG), and it leverages the previous work conducted within that group on audio video bridging (AVB), its predecessor [5]. AVB was first specified to support the real-time transmission of audio/video (A/V) traffic, and it includes a synchronization specification, simple resource reservation and scheduling specifications. As more time-sensitive applications emerge, AVB standards are used not only for A/V transmission but also for manufacturing automation, automotive networks, mobile communication network fronthaul, etc. [6,7]. Thus, the IEEE 802.1 WG renamed AVB as TSN to better reflect the expanded scope and issues more specifications to improve the real-time capability and reliability of Ethernet. Nowadays, TSN provides various synchronization, resource reservation, queuing and scheduling, control and configuration, certainty, security and safety mechanisms. Updated versions and new specifications are still being developed. In addition to standards, both industry and academia also pay attention to the study of TSN, usually in the following fields. Firstly, time-synchronization designs are investigated, which synchronize network devices to a reference clock with an accuracy between 10 ns and 1 µs. Secondly, resource-management schemes are designed, which reserve bandwidth for critical time-sensitive traffic with guaranteed latency. To further provide bounded latency, queuing and forwarding schemes are investigated, which give priority to critical traffic while trying to reduce the side effects on the general traffic that coexists with it. Thirdly, centralized, distributed and hybrid configuration models and configuration languages are studied to provide static or dynamic control over synchronization, resource management and scheduling. Finally, schemes that guarantee security and certainty are studied, such as filtering, redundancy provision and link aggregation. In this paper, we present all related standards on TSN, which include not only those specified by the IEEE 802.1 WG but also those specified by the Internet Engineering Task Force (IETF), the IEEE 802.3 WG and the OPEN Alliance (OA). In addition, we discuss related products, and analyze the demo setup and promising techniques of TSN used for intelligent driving applications. The rest of the paper is organized as follows. Section 2 is the related work. Section 3 presents published and ongoing standards of TSN, which can be used for autonomous driving. Section 4 introduces vehicle TSN products, such as switches, endpoints and protocol stacks. Section 5 gives a demo setup for the realization of TSN on a car. Then, open issues and trends of TSN are analyzed in Section 6. Finally, Section 7 concludes this paper. Table 1 summarizes the contribution of our work in comparison to previous relevant surveys. Table 1. A comparison of contributions between our survey and relevant surveys.

Related Works Recently, some works have reviewed TSN from different perspectives. For example, the authors in [13] surveyed specified TSN in industrial communication and automotive in detail as well as their applicability to various industries. Nasrallah et al.
provided an overview of ultra-low latency communication techniques, including TSN and fifth generation (5G) techniques used in various applications [14].A survey on techniques for the modeling from AVB to TSN and the recent advances in real-time Ethernet design methodologies from AVB to TSN are presented in [10].In addition, some works reviewed some key specifications of TSN, e.g., [15,16], who introduced some main specifications on the data link layer.The authors in [17] investigated the use of TSN on wireless systems for real-time industrial communication based on next-generation wireless standards, such as wireless TSN techniques, IEEE 802.11AX and 5G cellular systems.Kang et al. in [18] reviewed the trend towards the standardization of TSN in 5G networks and provided insights into wireless communication technologies for wireless TSN.There are few works that specifically focus on the TSN for automotive networks.Authors in [19] provided a review of several TSN standards in light of possible future use cases in automotive systems using in-vehicle Ethernet networks.The recent survey in [8] focused on hardware/software solutions for intelligent driving systems, which provided an overview of the current technological challenges in on-board and networked automotive systems, including TSN.The author in [20] investigated a partitioning system for TSN to support in-vehicle communications, which, by design, allows to dynamically add new traffic flows without impacting the flows defined by the carmaker at the design time.However, the detailed TSN towards intelligent driving has not been well addressed until now.Although much research work and investigation have been conducted on TSN, detailed applications of TSN in the area of smart driving have not yet been fully explored. TSN Related Standards Enabling time-sensitive and deterministic vehicle networks is systematic engineering, which refers to multiple layers with mutual cooperation.For applications of local area network (LAN), most specifications focus on the data link layer (Layer 2, mainly standardized by IEEE 802.1 TSN [21] group) based on switched Ethernet.IEEE 802.3 WG pays attention to the corresponding Ethernet physical layer (PHY) technique.IETF deterministic networking (DetNet) WG cooperates with the IEEE 802.1 TSN group, which provides the network layer (Layer 3) solution to deterministic routing.In addition, IETF defines the general architecture for Layers 2 and 3. OA focuses on the test standardization of TSN.There are several differences and similarities between these two standards.The main difference between DetNet and TSN is the layering in the OSI model.DetNet operates on the Layer 3 protocols, whereas TSN is confined to Layer 2. 
The data plane of these standards is also different.DetNet nodes can connect to other subnetworks, such as the optical transport network (OTN) and MPLS Traffic Engineering.TSN cannot achieve multi-layer systems, while DetNet can.However, TSN and DetNet share the same features, such as time synchronization, frame replication and elimination.We divided this section into the following four subsections based on the TSN standards within different layers.The first is PHY transmission-related standards.The second subsection is the TSN protocol related to the scheduling and configuration of the data link layer.The third subsection is divided into the standards for the DetNet network.This section is the TSN protocol of the third layer standardized by the IETF.The fourth subsection is the vehicle TSN testing standard developed by OA. PHY-Related Standards Automotive TSN networks based on Ethernet PHY usually use a single-pair twisted pair to reduce the cable weight.The transmission rate can support from 10 Mb/s to 10 Gb/s.There are three related specifications as compared in Table 2. IEEE 802.3bw [22] is the first automotive Ethernet specification which specifies 100 Mb/s Ethernet (100 BASE-T1) over a single twisted pair for automotive applications.IEEE 802.3bw [22] targets the reduction of the number of wires in wiring harnesses, which in turn reduces the cost and weight of a vehicle.In addition, IEEE 802.3bw provides a homogeneous in-vehicle network architecture with increased data speed for advanced applications, such as ADAS, infotainment (streaming music, video, DVD and BluRay) and the overall electrification of motorized vehicle functions.The transmission distance can reach 15 m for an unshielded twisted pair (up to 40 m for a shielded twisted pair).Although only a single-pair differential twisted pair is used, 100 Mb/s automotive Ethernet can also perform full-duplex communication by using echo cancellation technology.Targeting the lower cost and higher performance requirement of critical data, two more specifications are designed by IEEE 802.3 WG [25].IEEE 802.3cg [23] provides 10 Mb/s bandwidth, which further reduces the cost of network deployment by removing the need for switches and sharing the 10 Mb/s Ethernet medium between multiple devices.Compared with the industrial Ethernet, which generally uses multiple pairs of twisted-pair wires with more wire harnesses, and generally uses RJ45 interfaces, IEEE 802.3cg [23] makes automotive Ethernet have no specific connector, which is usually smaller and compact to greatly reduce the weight of the wiring harness in the car.IEEE 802.3ch [24] considers asymmetrical data rates of physical transmission.For example, 10 Gb/s is used for the forward channel (data from camera for video), and 100 Mb/s to 1 Gb/s for the backward channel (data to camera for control). 
MAC Related Standards Traditional Ethernet technology cannot satisfy the real-time synchronous transmission of data in audio and video networks. Therefore, the IEEE 802.1 working group established the AVB working group in 2005. Based on the existing Ethernet system, a series of new standards provide service quality guarantees for the transmission of audio and video streaming data through clock synchronization, bandwidth guarantees and traffic shaping. These standards are summarized in this subsection. Table 3 lists the TSN standard classifications and their applications. 802.1AS [26] was developed based on the IEEE 1588 [39] precision time protocol (PTP) [40,41]; it specifies the protocols, procedures, and managed objects used to ensure that the synchronization requirements are met for time-sensitive applications, such as audio, video, and time-sensitive control [23]. The standard defines a best master clock algorithm (BMCA) to select the time reference node, and a generalized precision time protocol (gPTP) to synchronize the clocks of nodes by providing them the clock value of the reference node, called the grand master (GM). In one stage, the master port periodically sends sync messages, from which the slave port can estimate the rate ratio between the neighbouring clocks as r = (t01′ − t01)/(t00′ − t00), where t01 and t01′ are the times of receiving two successive sync messages at the slave port, while t00 and t00′ are the times of sending these sync messages at the master port. The other stage is that the slave port sends a delay request and the master port responds to the delay request with two signaling messages that carry the receiving time of the delay request message and the sending time of the delay request response, respectively, as shown in Figure 1. In this stage, the slave port can calculate the propagation delay of the link as D = [(t4 − t1) − (t3 − t2)]/2, where t1 and t4 are the times of sending the delay request and receiving the delay request response at the slave port, respectively, and t2 and t3 are the times of receiving the delay request and sending the delay request response at the master port, respectively. Then, the slave port can adjust its time to the reference time, i.e., achieve synchronization. The role of a port, i.e., master or slave, can be designated by the controller in advance or selected based on some algorithm, say, the best master clock algorithm (BMCA). By periodically exchanging the above signaling messages, AS-aware devices remain synchronized. Thus, 802.1AS [26] enables systems to meet the respective jitter, wander, and time-synchronization requirements of time-sensitive applications, including applications that involve multiple streams delivered to multiple end stations. For autonomous driving applications, time synchronization is a preliminary and important aspect. For example, an ADAS decision is usually made based on the fused sensing information of different devices, such as radar and cameras. These devices need to be synchronized before the data fusion. Thus, manufacturers choose it as the first realized standard of their TSN products. A small numerical sketch of these two timestamp computations is given below.
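The following short Python sketch is an assumption-level simplification of ours (not the 802.1AS state machines) illustrating the two timestamp computations just described: the neighbour rate ratio from two successive sync messages, and the mean link delay from the delay request/response exchange, plus the resulting offset correction. The timestamp values are invented for illustration (a slave clock running 50 µs ahead of the master over a 2 µs link).

```python
# Illustrative gPTP-style timestamp arithmetic (all times in microseconds).
def rate_ratio(t00: float, t00p: float, t01: float, t01p: float) -> float:
    """Master transmit times (t00, t00p) vs. slave receive times (t01, t01p) of two syncs."""
    return (t01p - t01) / (t00p - t00)


def mean_link_delay(t1: float, t2: float, t3: float, t4: float) -> float:
    """t1/t4 measured at the slave, t2/t3 at the master: D = ((t4 - t1) - (t3 - t2)) / 2."""
    return ((t4 - t1) - (t3 - t2)) / 2.0


def offset_from_master(t_master_tx: float, t_slave_rx: float, link_delay: float) -> float:
    """Slave clock minus master clock, after removing the propagation delay."""
    return t_slave_rx - t_master_tx - link_delay


# delay request sent at slave time 2050, received at master time 2002,
# response sent at master time 2003, received back at slave time 2055
d = mean_link_delay(t1=2050.0, t2=2002.0, t3=2003.0, t4=2055.0)          # -> 2.0 us
# a sync transmitted at master time 1000 arrives at slave time 1052
off = offset_from_master(t_master_tx=1000.0, t_slave_rx=1052.0, link_delay=d)  # -> 50.0 us
print(d, off)
```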
IEEE Std 802.1CB For supplying the deterministic network, 802.1CB [38] supplies transmission redundancy via frame replication at the transmitters and elimination at receivers [38].This standard supports sequence numbering, the replication of each packet in the source end station or network relay system, and the ability to eliminate duplicate packets in the targeted end system or other relay systems based on the sequence number carried in the frame.In this specification, the main defined functions are the stream identification function, sequencing function, individual recovery function, sequence encode/decode function and stream-splitting function.The stream identification function is mainly used for identifying and extracting the stream number of streams.For each stream, the sequence function orders the packet to recover the right order of received packets and discard repetitive packets.The discarding of packets at the receiver is realized by the individual recovery function.In the individual recovery function, transmitted packets are recovered.The sequence encode/decode function is responsible for adding the sequence number (i.e., the number of packets in a stream) or extracting the packet number from the received packets.The stream-splitting function can make multiple copies for a packet of a stream.In the same time, the stream number of packets are encoded to guarantee that copies of packets can be deleted at the receivers. Through sending packets on different routings, the successful delivery probability can be improved, especially when there are congestion relay nodes on some route.Thus, IEEE 802.1CB [38] is a major specification of TSN contributing to the transmission certainty.For autonomous driving applications, 802.1CB [38] is important to some critical traffic, e.g., braking and direction control.As we all know, the L4 level autonomous driving requires a redundant processor.With IEEE 802.1CB [38], devices can establish the communication mechanism between the main processing system and the redundant processing system.In addition, IEEE 802.1AX provides a way for aggregating the original link and redundant link [42].Specifically, 802.1AX enables multiple paths to be merged together as a link aggregation group (LAG), and then the end station can treat the LAG as a link to process.As a result, the bad part of the aggregated multiple links does not affect the correct transmission of data, and the successful transmission probability can be improved.However, the provided transmission certainty of these time-sensitive applications is guaranteed by 802.1CB [38] at the cost of an increase in the occupied bandwidth. IEEE Std 802.1Qci This specification mainly filters and suppresses ingress flows, which is realized by controlling ingress gates based on an access table [22].In detail, a stream filter instance table records an ordered list of stream filters, which defines the filtering and policing actions to be applied to the frames of a specific stream.A stream gate is controlled by a state table, which determines whether a frame is allowed to pass through the gate or not by opening the gate or closing the gate. 
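To make the replication/elimination mechanism of 802.1CB concrete, the following is an illustrative Python sketch of the idea only (a simplification of ours, not the standard's sequence recovery algorithms): the talker stamps each frame of a stream with a sequence number and sends one copy per path, and the listener passes the first copy of each sequence number and discards later duplicates.

```python
# Illustrative frame replication (talker side) and duplicate elimination (listener side).
from dataclasses import dataclass


@dataclass(frozen=True)
class Frame:
    stream_id: int
    seq: int
    payload: bytes


class DuplicateEliminator:
    def __init__(self) -> None:
        self.seen: dict[int, set[int]] = {}     # stream_id -> sequence numbers already passed

    def accept(self, frame: Frame) -> bool:
        passed = self.seen.setdefault(frame.stream_id, set())
        if frame.seq in passed:
            return False                        # duplicate arriving over a redundant path
        passed.add(frame.seq)
        return True


def replicate(frame: Frame, n_paths: int) -> list[Frame]:
    return [frame for _ in range(n_paths)]      # same sequence number on every member path


elim = DuplicateEliminator()
f = Frame(stream_id=7, seq=42, payload=b"brake-command")
copies = replicate(f, n_paths=2)                # sent over two disjoint routes
delivered = [c for c in copies if elim.accept(c)]
print(len(delivered))                           # -> 1: only one copy reaches the application
```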
With the filtering, IEEE 802.1Qci [37] provides for quality of service (QoS) protection by traffic suppression and traffic blocking.For example, the traffic of denial of service (DoS) attacks usually attacks the network via bursty and abundant traffic.The IEEE 802.1Qci [37] filter performs per-flow filtering by matching frames with permitted stream identifications (IDs) and priority levels.Then, the ingress filter can detect whether the stream rate is larger than its reserved bandwidth or not.If the stream rate exceeds the permitted rate, the filter will suppress the rate to the reserved bandwidth by controlling the ingress gate.In addition, the filter can also prevent network attacks (such as address resolution protocol (ARP) attacks) to keep the attacks out by checking the stream IDs.If the stream ID is unrecognized, the stream will be blocked.For automotive applications, IEEE 802.1Qci [37] can be used for the ingress management and providing security.These specifications define different time-aware queuing and forwarding protocols to provide low-latency service for time-sensitive applications [43].IEEE Std 802.1Qav [30] provides a credit-based shaper (CBS) scheduling scheme for packets with different priorities of time-sensitive traffic.In this method, the transmission time is determined by a credit.When the credit value of frame is not negative, the frame can be transmitted from the egress.Otherwise, the frame is not allowed for transmission.The value of the credit depends on the reserved bandwidth of this stream.IEEE Std 802.1Qat [35] provides a method of bandwidth reservation for time-sensitive streams.The amount of reserved bandwidth is calculated based on the priority of the traffic, frame duration, and maximal data size per frame.Through reserving a certain bandwidth on the end-to-end transmission path of a stream, the transmission latency can be guaranteed, which is critical to time-sensitive traffic.Note that only time-sensitive applications are scheduled by CBS in IEEE Std 802.1Qav [30] and Qat [35].General applications are default scheduled by the strict priority scheme.In the strict priority scheme, streams are forwarded based on their priority, where higher priority is forwarded more preferentially.For IEEE Std 802.1Qbv [31], it schedules both time-sensitive and general applications by activating or deactivating queues at the egress port, where each queue corresponds to a priority and a gate.Through opening or closing a gate for a queue and controlling the duration time of opening a gate, 802.1Qbv [31] controls the available bandwidth of different queues.The scheduling methods of Qav, Qbv and strict priority scheme can be used together or separately. IEEE 802.1Qch [32] utilizes two queues at the egress port for cyclic queuing and forwarding.Only a queue transmits at any time.While one queue is enabled, all received messages during this time are allocated to the respective other queues (which are disabled).For further providing the transmission certainty of critical traffic, frame preemption can be used by combining it with the above scheduling methods such as those of Qbv and Qch.The frame preemption is standardized by IEEE Std 802.1Qbu [28].802.1Qbu [28] enables a time-critical frame to preempt the transmission time of non-time-critical frames to guarantee low latency. 
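As an illustration of the credit mechanism described for 802.1Qav above, the following Python sketch (our own simplification, not a conformant shaper) lets a traffic-class queue transmit only when its credit is non-negative; the credit grows at the reserved rate (idleSlope) while frames are waiting and drains at sendSlope = idleSlope − portRate during transmission. The numeric values are arbitrary examples.

```python
# Illustrative credit-based shaper for one traffic class on one egress port.
from collections import deque


class CreditBasedShaper:
    def __init__(self, idle_slope_bps: float, port_rate_bps: float):
        self.idle_slope = idle_slope_bps                  # bandwidth reserved for this class
        self.send_slope = idle_slope_bps - port_rate_bps  # negative: credit drains while sending
        self.credit = 0.0
        self.queue: deque[int] = deque()                  # frame sizes in bits

    def enqueue(self, frame_bits: int) -> None:
        self.queue.append(frame_bits)

    def wait(self, seconds: float) -> None:
        if self.queue:                                    # credit accrues while frames are waiting
            self.credit += self.idle_slope * seconds

    def try_transmit(self, port_rate_bps: float) -> bool:
        if not self.queue or self.credit < 0:
            return False                                  # negative credit: hold the frame back
        bits = self.queue.popleft()
        tx_time = bits / port_rate_bps
        self.credit += self.send_slope * tx_time          # drain credit during transmission
        if not self.queue and self.credit > 0:
            self.credit = 0.0                             # unused positive credit is reset
        return True


shaper = CreditBasedShaper(idle_slope_bps=10e6, port_rate_bps=100e6)   # 10 Mb/s reserved
shaper.enqueue(12000)                                                  # a 1500-byte frame
shaper.wait(0.0002)
print(shaper.try_transmit(port_rate_bps=100e6), shaper.credit)
```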
These TSN scheduling methods provide possible solutions to support future autonomous driving, particularly for tight control applications, such as steering, braking, and propulsion over Ethernet.For example, the delivery of chassis control data should be in accordance with strict latency without room for compromise.The automotive industry generally requires that the chassis system delay does not exceed 5 ms, preferably 2.5 ms or 1 ms.This is also the biggest difference between automotive Ethernet and general Ethernet.Some traffic only requires to do their best, such as the entertainment system data, which can be flexibly controlled.These TSN scheduling methods also provide a coexistent way for critical traffic with general traffic.However, the side effects of the scheduling of critical traffic on that of the best-effort traffic should be further investigated and reduced. IEEE 802.1Qcc IEEE 802.1Qcc [36] specifies protocols, programs, and managed objects to configure network resources for time-sensitive applications [36].It provides network configuration from the speaker to the listener to meet the requirements of applications, such as transmission delay.The configuration can be classified as three models, fully distributed, centralized/distributed, and fully centralized, the former two of which are focused.For the fully distributed configuration, the network is configured by individuals in a completely distributed manner, and there is no centralized network configuration entity.The distributed network configuration is performed by using a protocol that propagates TSN user/network configuration information along the active topology of the flow.As the user needs to propagate in each bridge, the resource management of the bridge is effectively performed locally.This local management is limited to the information known to the bridge, and does not necessarily include the information of the entire network.For the central-ized/distributed network configuration, it specifies the management object for the bridge configuration through the centralized network configuration (CNC) component.With CNC, some complicated calculation can be performed on it instead of being performed by each end station and bridge.In addition, it is beneficial for using CNC to collect global information of the whole network; as a result, the performance of some TSN standardizations can be improved based on global scheduling, such as Qat, Qav, Qbv, Qbu, etc. Current automotive TSN products usually use fixed configuration instead of realizing this standard since the topology and transmission environment is simple and more static compared with that of the industry network.For example, automotive application requirements are usually fixed.For a large class of real-time applications, including many cyber-physical systems (CPSs), much of the time-sensitive network traffic from sensors or actuators is predictable and periodic, whether it is 1 cycle/s or 32,000 cycles/s, which makes a fixed schedule feasible.End stations and bridges do not necessary calculate the configuration, such as reserved bandwidth, for each stream frequently.However, Qcc can also be used for automotive networks, e.g., we can use a pre-configuration based on Qcc for traffic scheduling and transmission certainty. 
Layer 3 Related Standards Layer 3 networking for the QoS guarantee (also called as DetNet networking) is standardized by IETF, which collaborates with TSN WG to provide flows with extremely low packet loss rates, an upper bound of the out-of-order packet delivery and assured maximum end-to-end delivery latency.Three techniques are used for providing these QoS requirements, i.e., resource allocation, service protection and explicit routes [44].In general, DetNet focuses on extending the TSN data and control plane into the Layer 3 domain, thus expanding the scope of TSN beyond LANs.For automotive applications, this explanation is useful for deterministic vehicle-to-everything (V2X) transmission. Data Plane Framework We firstly discuss related specifications on the data plane of Layer 3. The architecture of related data plane functions can be decomposed into two sub-layers, a service sub-layer and a forwarding sub-layer as shown in Figure 2 [45].The service sub-layer provides service protection, such as packet replication, elimination, and packet ordering.The frame replication and elimination for reliability (FRER) is realized by transmitting packets and their duplicates along different paths and routers, which is similar to that of 802.1CB, while 802.1CB [38] performs frame replication and elimination within a LAN.We note that frame duplication, routing, and elimination are non-trivial tasks that will likely require centralized management.Hence, such protocols can be combined with other standards, e.g., 802.1Qcc [36] and 802.1Qca [34], to ensure seamless redundancy and fast recovery in time-sensitive networks.For the ordering function, it uses the sequence number, which is added to each packet to order the packets.The sequence number can be encoded into existing standardized headers.With the aid of the sequence number, it can enable a range of the packet order by dropping out-of-order packets or reordering some out-of-order packets with a tolerable time delay. The forwarding sub-layer guarantees the QoS of flow based on existing queuing techniques and traffic engineering methods of internet protocol (IP) networks and multiprotocol label-switching (MPLS) networks.For example, the forwarding sub-layer encodes specific flow attributes (flow identity and sequence number) into packets to provide low loss and assured latency.DetNet routers ensure that DetNet service requirements are met per hop by allocating local resources and mapping the service requirements of each flow to appropriate sub-network mechanisms.The forwarding sub-layer can also use underlaying connectivity, such as TSN, to guarantee the QoS [45].Some further functions of the forwarding sub-layer are also considered in this specification.Firstly, the resource reservation can be used for a prioritized end-to-end flow.Secondly, the explicit route which pre-configures a path with a certain bandwidth can be used for controlling the latency of a flow.Thirdly, service protection can be studied, which uses multiple packet streams with multiple paths, based on which network coding at different routers can be easily implemented for further flow security and transmission efficiency. 
Control Plane and Configuration Although the current DetNet WG focuses on the data plane, some preliminary concepts and requirements of the control plane are briefly described. The IETF specifies in [45] that the control plane should instantiate flows in a DetNet domain. For a flow, the corresponding control should be instantiated in terms of the requirements of both the service sub-layer and the forwarding sub-layer. For the forwarding sub-layer, the control plane covers the determination of explicit routes, resource reservations, queuing, etc. For example, the control plane can advertise link resources, such as capabilities and adjacency, to control nodes for resource reservation. For the service sub-layer, the control plane covers the flow ID, flow aggregation, etc. For example, it can insert the flow ID and packet ID by managing the allocation and distribution of the S-Label and F-Label of MPLS. In addition, the control plane can provide flow identification information at each of the nodes along the path. These control plane services can be implemented by distributed control protocol signaling, centralized network management provisioning mechanisms or hybrid mechanisms. How control is performed is independent of the data plane; the data plane is concerned only with the results produced by the control plane. However, the implementation of control will affect the efficiency of the data plane. For example, centralized control can take advantage of global tracking of resources in the DetNet domain for better overall network resource optimization, while distributed control is more scalable. For the configuration, a YANG model is specified, which describes the parameters needed for DetNet flow configuration and flow status reporting [46]. Using this model, the configuration can be acquired by nodes along a flow's transmission path. As a result, these nodes can allocate resources, queue and forward packets, and replicate, eliminate and order packets according to the configuration information. These actions thereby provide a bounded-latency and zero-congestion-loss end-to-end service along the path without any signaling protocols. In detail, this model defines the application flow configuration, the service sub-layer configuration, and the forwarding sub-layer configuration. The application flow configuration maps the application flow (the payload carried over a DetNet service) to the DetNet flow (a sequence of packets with a unique flow identifier, to which the DetNet service is to be provided) at the ingress node and then maps the DetNet flow back to the application flow at the egress node. The forwarding sub-layer configuration is specified to support congestion protection and the explicit route. For congestion protection, resource reservation, flow shaping, filtering and policing are usually used, which require information about the packets. Therefore, the forwarding sub-layer configuration defines some traffic-specification attributes, such as the transmission interval of the traffic, the maximum number of packets per interval, and the maximum packet size. For the explicit route, the configuration depends on the employed routing scheme. With a designated routing scheme, the configuration node can then calculate the delivery path of the flow. The service sub-layer configuration covers flow identification and the service function indication, which are used to identify a flow and the service function invoked at a DetNet node, respectively.
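The traffic-specification attributes defined by the forwarding sub-layer configuration (interval, maximum packets per interval, maximum packet size) are sufficient to derive the bandwidth a node must reserve for a flow. The helper below shows only that arithmetic; the 20-byte per-frame allowance for preamble and inter-frame gap is an assumption about Ethernet framing, not something the YANG model prescribes.

```python
def reserved_bandwidth_bps(interval_s, max_packets_per_interval,
                           max_packet_bytes, per_frame_overhead_bytes=20):
    """Upper bound on the bandwidth a DetNet/TSN node must reserve for one flow.

    interval_s: traffic-specification interval in seconds
    max_packets_per_interval: worst-case number of packets per interval
    max_packet_bytes: worst-case packet size (payload + headers)
    """
    bits_per_interval = (max_packets_per_interval
                         * (max_packet_bytes + per_frame_overhead_bytes) * 8)
    return bits_per_interval / interval_s

# a 1 ms flow with at most 2 packets of 1500 bytes each
print(reserved_bandwidth_bps(1e-3, 2, 1500) / 1e6, "Mbps")  # ~24.3 Mbps
```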
IP over TSN To enable Layer 3 to cooperate with TSN, the IETF specifies related work on IP over TSN, which describes how IP is used by DetNet nodes, i.e., hosts and routers, to identify DetNet flows and provide a DetNet service [47]. From a data plane perspective, DetNet IP only supports the forwarding sub-layer, which provides congestion protection, i.e., low loss, assured latency and limited out-of-order delivery. The service protection of the service sub-layer, such as packet replication and elimination, can be provided by technologies such as MPLS and IEEE 802.1 TSN [21]. To enable IP to identify deterministic flows and provide a deterministic service based on an IP data plane, existing IP and higher-layer protocol header information is used without DetNet-specific encapsulation. As shown in Figure 3, the TSN sub-network can be seen as one hop of the end-to-end path from the IP perspective. In order to use a TSN sub-network between IP nodes, two problems must be solved. Firstly, the forwarding path of packets in the sub-network must be known. Secondly, flow-related parameters or requirements must be converted to those of the packet sequence in the sub-network. The first problem can be solved by mapping an ingress unicast IP flow to a specific Layer 2 multicast destination media access control (MAC) address and a virtual local area network (VLAN). The packet can then be forwarded within the TSN sub-network. At the other end of the TSN sub-network, the destination address is converted back so that the packet can continue to be forwarded as an IP packet. One method of mapping between IP flow identifiers and TSN stream identifiers is explicit configuration. The other method is performed by a TSN-aware IP node via the information provided for configuring the TSN stream identification functions (e.g., the IP stream identification, mask-and-match stream identification and active stream identification functions provided in 802.1CB [38] and 802.1CBdb [49]). For the second problem, Ethernet encapsulation is performed to encode the flow-related parameters and requirements. The TSN node can then obtain the service requirements, such as the required transmission success probability, by decoding the encapsulation. To guarantee these service requirements, TSN methods are used, such as the FRER provided in 802.1CB [38]. In addition, centralized or distributed resource allocation and scheduling methods can be applied from a general perspective by regarding a TSN sub-network as an IP node in the IP network.
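A minimal illustration of the first problem above is an explicit table that maps an ingress IP flow to the Layer 2 multicast destination MAC address and VLAN used inside the TSN sub-network. The addresses, VLAN ID and priority below are invented for the example, and the lookup is a simplification of the stream identification functions of 802.1CB/802.1CBdb.

```python
from ipaddress import ip_address

# explicit configuration: (src, dst, protocol, dst_port) -> (multicast MAC, VLAN ID, PCP)
FLOW_TO_STREAM = {
    ("10.0.0.5", "10.0.1.9", "udp", 5000): ("01:00:5E:00:01:81", 10, 6),
}

def classify(src, dst, proto, dport):
    """Return the TSN stream handle (dest MAC, VLAN ID, priority) for an IP flow."""
    key = (str(ip_address(src)), str(ip_address(dst)), proto, dport)
    try:
        return FLOW_TO_STREAM[key]
    except KeyError:
        return None  # best-effort traffic: no TSN stream reserved

print(classify("10.0.0.5", "10.0.1.9", "udp", 5000))
```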
MPLS over TSN When integrating TSN with MPLS, a TSN sub-network can also be seen as a single-hop connection between MPLS nodes. At the current state, interworking across the DetNet MPLS network and the TSN network is not yet available. Similar to IP over TSN, the TSN edge port converts an ingress unicast MPLS flow to use a specific Layer 2 multicast destination MAC address and a VLAN, to direct the packet through a specific path inside the bridged network. A similar interworking function pair at the other end of the TSN sub-network restores the packet to its original Layer 2 destination MAC address and VLAN [48]. In detail, the mapping between an MPLS flow and a TSN stream can be performed at the frame level via passive or active stream identification functions. In the passive stream identification function, the MPLS label of the MPLS flow is used for the mapping. For example, IEEE P802.1CBdb defines a mask-and-match stream identification function that can be used as a passive function for MPLS flows. In the active stream identification function, the Ethernet header is modified according to the ID of the mapped TSN stream. For example, IEEE 802.1CB [38] defines an active destination MAC and VLAN stream identification function, which can replace some Ethernet header fields, i.e., the destination MAC address, the VLAN ID and the priority parameters, with alternate values. IETF standardization focuses on the TSN-aware MPLS node and splits the TSN-aware MPLS node into a TSN-unaware talker/listener and a TSN relay. Before the transmission and reception of a stream to/from a TSN sub-network, TSN sub-network-specific Ethernet encapsulation should be inserted or removed for an MPLS flow, which is usually performed by an edge node located at the boundary of a domain. These MPLS edge nodes not only perform the transformation between the MPLS flow and the TSN flow but are also service sub-layer aware. Flow requirements within the TSN sub-network can be guaranteed by Layer 2 time-sensitive techniques. Outside the TSN sub-network, MPLS nodes can also use FRER to enhance the reliability of delivery. Test Standards To evaluate the effects of TSN on time-sensitive applications and general traffic, test specifications are gradually being worked out. The main contributor is the OA, which was jointly established by related companies such as NXP, Broadcom and BMW in 2011 and currently has more than 340 members. To apply Ethernet-based communications and TSN to automotive networks, the OA formulates and unifies the physical layer, protocol conformance and interoperability specifications of the IEEE 100BASE-T1, 1000BASE-T1 and 1000BASE-RH communication methods. The OA presents specifications on tests of the wiring harness, switch, ECU and other functional requirements, e.g., the ECU-level physical layer, data link layer, TCP/IP protocol layer and SOME/IP test specifications formulated by technical committee (TC) 8, the automotive wiring harness and connector test specifications formulated by TC 2, and the interoperability, compliance, and electromagnetic compatibility (EMC) requirements and test methods for 10BASE-T1 PHYs standardized by TC 14.
Most of the 14 TCs of the OA focus on the PHY layer, protocol conformance and interoperability, which are the basis of TSN implementation. The TC focusing on the TSN protocol test itself is TC 11, which creates specification and qualification requirements for Ethernet switches. In detail, TC 11 defines functional features for switch semiconductors (standalone or built-in) and covers the interfacing, configuration, diagnostics and monitoring of switches, which can be used for TSN tests. In addition, TC 11 specifies tests on TSN requirements and characteristics, such as QoS requirements, queuing, time stamping, policing, and filtering [50]. In TC 11 [50], the requirements and test points of switches are specified. For example, it indicates that the Ethernet switch shall support at least eight different priority levels according to IEEE 802.1Q and provide a queue for each priority on each egress port to support the different QoS classes of TSN. The Ethernet switch can overwrite the priority of a frame at an ingress port independently of the incoming priority (i.e., support global priority overwrite). Incoming priorities shall be freely mappable to internal queues by the ingress filter. Frames of internal queues shall be freely mappable to priorities according to IEEE 802.1Q [43] on the egress port. Each queue at an egress port has a shaper to schedule frames. The shaper supports strict priority scheduling and the CBS algorithm according to IEEE 802.1Qav [30], and each shaper can be deactivated individually to disable TSN scheduling for that queue. For time synchronization, TC 11 specifies that the Ethernet switch should support both the IEEE 1588 PTP protocol and the IEEE 802.1AS [26] protocol. In addition, the ports should be able to synchronize with each other. For diagnostics and robustness, the Ethernet switch shall provide at least the following counters individually for each port: number of received frames, number of received bytes, number of frames dropped after reception, number of sent frames, number of unsuccessfully sent frames, number of sent bytes, and maximum fill level of the queues since the counters were cleared. In [50], TC 11 also provides a collection of test cases which are recommended for automotive use cases and should be referenced by car manufacturers within their quality-control processes. In detail, it presents the test procedures for TSN-based time synchronization, which check the 1-step frame forwarding mechanism, including the correct implementation of residence time measurement. The test station sends sync frames to the PTP slave port and receives frames on all PTP master ports of the device under test (DUT). The corresponding timestamps of the test station are recorded. The correction time of the sync message is checked to confirm that its value correlates with the timestamp measurements of the test station. In addition, the test cases check whether the switch supports priority-based QoS by using all eight possible priority values. The strict priority algorithm is utilized as the forwarding selection mechanism in order to verify that forwarding is based on priorities.
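The per-queue shaper behaviour referenced in these requirements (strict priority plus CBS according to 802.1Qav) can be pictured through the credit evolution of a single credit-based shaper queue: credit accumulates at idleSlope while the queue waits and decreases at sendSlope while it transmits, and a frame may start only when the credit is non-negative. The snippet below is a simplified, event-free sketch of that rule, not a TC 11 test implementation.

```python
def cbs_can_send(credit_bits, gate_open=True):
    """A CBS queue may start a frame only when its credit is >= 0 bits."""
    return gate_open and credit_bits >= 0

def cbs_update(credit_bits, dt_s, transmitting, idle_slope_bps, port_rate_bps):
    """Advance the credit of one CBS queue by dt_s seconds.

    idle_slope_bps: bandwidth reserved for this traffic class on the port;
    sendSlope = idleSlope - portTransmitRate, as in 802.1Qav.
    """
    send_slope_bps = idle_slope_bps - port_rate_bps
    slope = send_slope_bps if transmitting else idle_slope_bps
    return credit_bits + slope * dt_s

# 100 Mbps port, 20 Mbps reserved: transmit a 1500-byte frame, then recover credit
credit = 0.0
credit = cbs_update(credit, dt_s=1500 * 8 / 100e6, transmitting=True,
                    idle_slope_bps=20e6, port_rate_bps=100e6)
print(round(credit), "bits of credit after the frame")   # negative: -9600
print(cbs_can_send(credit))                              # False until credit recovers
credit = cbs_update(credit, dt_s=480e-6, transmitting=False,
                    idle_slope_bps=20e6, port_rate_bps=100e6)
print(round(credit), "bits after idling")                # back to ~0
```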
TSN-Related Products Early TSN products were generally used for industrial automation and realized the main protocols, such as IEEE 802.1AS [26], IEEE 802.1Qav [30], and IEEE 802.1Qat [35]. With the development of TSN and autonomous driving, TSN products for automotive use have been designed. Active manufacturers in this area mainly include TTTech, Microchip, NXP, Excelfore, Broadcom, Marvell, Spirent, etc. In this section, we briefly review some typical TSN products for intelligent driving from recent years. In terms of TSN switch chips, the NXP SJA1110 is the first automotive Ethernet switch designed to address the major challenges faced by current in-vehicle networks, including scalability, reliability, security, and high-speed traffic engineering. This switch complies with the AVB/TSN synchronization standards. In addition, NXP designed the SJA1105T chip, which is the core of a multi-functional product. This switch chip supports standard Ethernet networks and handles not only best-effort traffic but also traffic with QoS requirements by using TSN for clock synchronization and time-aware shaping. The Microchip Corporation designed a series of Ethernet switches, such as the KSZ8565, KSZ8765 and KSZ8842, which support TSN features, including IEEE 1588 v2 PTP. The Broadcom BCM8956X series devices are Broadcom's fifth-generation fully integrated L2+ multilayer switch solution, which supports the AVB protocol stack (IEEE 802.1AS [26] time synchronization and IEEE 802.1Qat [35]). Beyond the basic AVB specifications, Marvell developed a series of products with more TSN features. For example, the Marvell switches 88Q5072 and 88Q6113 added TSN features to achieve the filtering and policing of data streams (IEEE 802.1Qci [37]) and frame preemption (IEEE 802.1Qbu [28]). The integrated L3 hardware accelerator allows a routing throughput of up to 10 Gbps without internal processor intervention. To promote big data transmission in the vehicle network, these devices provide efficient sleep/wake functions that support the TC 10 standard, reducing the overall power consumption. The Marvell 88Q5050 is an eight-port, high-security automotive gigabit Ethernet switch chip with advanced security features to prevent cyber threats, such as DoS attacks. The eight-port Ethernet switch chip has four fixed IEEE 100BASE-T1 [22] ports and four configurable ports. The switch chip provides local and remote management functions, and users can easily access and configure the device. In terms of the TSN protocol stack, Excelfore eAVB/TSN now runs in cameras, video displays, head units and ECUs from numerous vendors. The Excelfore eAVB/TSN stack has been ported to automotive-grade operating systems, including Linux, Mentor Automotive AUTOSAR and Green Hills Software INTEGRITY [51]. The Excelfore protocol stacks are integrated and optimized for use with the safe and secure INTEGRITY RTOS from Green Hills Software [51], including support for Ethernet AVB/TSN talker/listener, DoIP, SOME/IP, RTP/RTCP (including IEEE 1733) and 802.1AS [26] slave/bridging.
In terms of TSN testing, TTTech designed a combined switch/ECU called the DESwitch Hermes 3/1 BRR, which is used for evaluating a variety of communication standards, including AVB, TSN and time-triggered Ethernet (SAE AS6802). With these technologies, users can evaluate the convergence of Ethernet control traffic, including security applications and the vehicle backbone architecture. Polelink developed a TSN test tool for automotive Ethernet called the TSN box. This TSN box is a network interface and gateway for TSN networks. It was developed based on field programmable gate array (FPGA) technology to serve as a data collection medium for TSN tools and supports nanosecond timestamps for time synchronization among multiple TSN boxes. In addition, the TSN box provides rich functional support for the AVB and TSN protocols commonly used in automotive Ethernet architectures, and can be used for exploring PTPv2, 802.1ASrev [27] and different TSN shaping algorithms, such as CBS, time-sensitive or asynchronous shaping. Xinertai launched an automotive Ethernet test program based on its proprietary BigTao hardware test platform. Together with Xinertai's Renix software [52], the Ethernet test program can realize Layer 2-7 traffic tests and protocol simulation for automotive Ethernet, and supports 100/1000BASE-T1 port connectivity tests, the RFC2889/RFC2544/RFC3918 standard test suites, routing and switching protocol testing, AVB/TSN protocol testing, distributed denial-of-service (DDoS) attack testing, long-term (e.g., 10 * 24 h) stability and streaming testing, etc. Spirent issued the AUTOSAR conformance test suite pack, which provides different protocol conformance test suites according to the OA test specifications. Through this test suite, automotive Ethernet tests can be run on Spirent C1 and C50 devices, which support testing of clock synchronization and 802.1Qav [30] scheduling of TSN. A vehicle Ethernet PHY chip must first meet the IEEE 802.3bw or IEEE 802.3bp specification and then pass the AEC-Q100 standard. Existing automotive Ethernet PHY chips include the BCM89610, BCM89611, BCM8988X, BCM89810, BCM89811 and BCM89820 from Broadcom, the AR8031 from Atheros, and the TJA1100, TJA1101 and TJA1102 from NXP. For example, the NXP TJA1101 is a single-port Ethernet PHY transceiver based on the IEEE 100BASE-T1 standard. It meets the needs of automotive applications, supports 100 Mb/s transmission, and can receive over more than 15 m of unshielded twisted pair. The TJA1100 targets the lowest system cost and meets the strict area and heat dissipation restrictions of the sensors of new-generation ECUs and ADAS. It complies with AEC-Q100 Grade 1 and was designed for the smallest package size, the lowest external component overhead and low power consumption. Demo Setup of TSN To realize the basic functions of TSN and provide a deterministic network for autonomous driving, the minimum complete set of standards to be realized should be considered. The set covered by most TSN products is usually constructed from the AVB standards, i.e., 802.1AS [26], 802.1Qav [30], and 802.1Qat [35], which are mainly used for A/V streams. Here, we further consider some recent TSN specifications for the basic set. Before studying the minimum complete set, we first present the traffic classes and requirements of vehicular applications, which are shown in Table 4.
Safety-relevant devices, such as the various kinds of sensors, need to be synchronized with each other so that their data can be fused. In addition, synchronization is the preliminary step of many scheduling and management schemes. Thus, the 802.1AS [26] specification needs to be realized first. To provide deterministic transmission for critical traffic, such as control commands, bandwidth reservation is needed. On the other hand, the traffic classes are finite and fixed compared with those of industry. Thus, a preferred bandwidth reservation method is to pre-allocate the bandwidth for the different application traffic instead of using 802.1Qat [35], which reduces the signaling overhead and the information storage required by the stream registration of 802.1Qat. Correspondingly, a scheduling method is needed for bridges and end stations to queue and forward frames of different classes. 802.1Qav [30] is preferred for data transmission within a domain. For data transmission among multiple domains, 802.1Qbv [31] is an alternative method. In addition, 802.1CB can provide a baseline for establishing redundant paths and supporting robustness. 802.1Qcc [36] can provide the corresponding configuration for these protocols. Other specifications can be further realized for a more robust and deterministic network. The basic TSN protocol stack model of a switch for in-vehicle networks is shown in Figure 4, where the blue blocks constitute a minimum complete set of standards to be realized for a TSN-supported bridge in automotive networks. An end station can be seen as a bridge with a single port. For the TSN demo setup, the above functions are expected to be examined and displayed. Here, we give a demo setup for some typical functions, namely those of 802.1AS, 802.1Qbv and 802.1CB, which is shown in Figure 5. As shown in this figure, synchronization is examined by observing whether two A/V traffic generators are synchronized or not. They can be two AVB cameras recording the same view. Display 1 shows the views of the two cameras; when the cameras are synchronized, the displayed views are the same. This is an intuitive demonstration. More accurately, synchronization can be tested using time-recording software: the two cameras photograph the software's time display and transmit the frames to DCU 1, and we can then check whether the transmitted pictures carry the same time. 802.1Qbv [31] is checked by using three traffic generators, i.e., A/V traffic, best-effort traffic and control traffic. With the interfering traffic (BE traffic and A/V traffic), the delay and jitter performance of the control traffic can be observed; it should not be affected by the interfering traffic and should have guaranteed latency and low jitter based on Qbv. For the 802.1CB demo, redundant routing is used for critical traffic. The robustness of the traffic transmission can be observed by causing routing congestion with abundant traffic. We simulated the time-synchronization effect of 802.1AS [26] and the traffic-scheduling performance of 802.1Qbv [31]. The simulation environment developed here is based on an FPGA, and the PHY chip is a Realtek RTL8201CP, which has a clock frequency of 25 MHz at 100 Mbps, i.e., a clock accuracy of 40 ns. The corresponding waveforms were obtained and analyzed experimentally.
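For the Qbv check in this setup, what is ultimately compared against the requirement is the per-frame end-to-end latency of the control traffic under interfering load. A small post-processing helper of the kind sketched below (an assumption about how the recorded send/receive timestamps could be summarised, not part of any standard or of the FPGA environment) turns the timestamp pairs into the latency and jitter figures of interest.

```python
import statistics

def latency_report(tx_rx_pairs_ns, budget_ns):
    """Summarise end-to-end latency from (tx_timestamp, rx_timestamp) pairs in ns."""
    latencies = [rx - tx for tx, rx in tx_rx_pairs_ns]
    return {
        "max_ns": max(latencies),
        "mean_ns": statistics.mean(latencies),
        "jitter_ns": max(latencies) - min(latencies),
        "within_budget": max(latencies) <= budget_ns,
    }

samples = [(0, 95_000), (1_000_000, 1_098_000), (2_000_000, 2_101_000)]
print(latency_report(samples, budget_ns=2_500_000))  # 2.5 ms chassis budget
```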
As shown in Figure 6, the vertical coordinate (time offset) represents the time deviation between nodes measured at each instant in µs, and the horizontal coordinate (time) represents the time-synchronization interval, which is 1 s. After a short period of jitter at the beginning of the time synchronization, the time deviation between the nodes converges; the final converged value of the time offset is 14.5 µs. The simulation results show the effectiveness of the time synchronization and indicate that AS is feasible in the demo setup. In Figure 7, the vertical axis (end-to-end latency) represents the total time it takes for a data packet to be sent from the sender to the receiver, measured in microseconds (µs). The horizontal coordinate (sampling times) indicates the number of times the data are sampled at the same time interval. The impact of enabling the time-synchronization feature on the traffic-scheduling performance of the TAS algorithm is verified by comparing the cases with and without time synchronization enabled, and the results show that the end-to-end delay of the traffic is reduced by an average of 80 µs when time synchronization is enabled. In addition, the simulation experiments show that Qbv achieves low end-to-end latency under time synchronization, which also validates the effectiveness and feasibility of the demo setup. Open Issues and Trends In general, TSN is an emerging technology for which many problems remain to be solved. In this section, several promising research directions with respect to TSN are discussed. We first highlight the need for a suitable grandmaster and for a comprehensive resource allocation and scheduling plan, then discuss the advantages and disadvantages of centralized and distributed scheduling methods, and emphasize the importance of an efficient and flexible network solution to support real-time applications in the automotive industry. Synchronization An open aspect of time synchronization is the efficient choice of the grandmaster (GM). The GM provides the reference time for the other devices in the network. However, the standardized BMCA needs frequent information exchange and comparison to select the best clock among all devices and ports. This brings considerable overhead and works against the goal of reducing latency. In particular, the main current forwarding and queuing schemes depend on the synchronized time, so the GM selection ultimately affects the efficiency and results of scheduling. An automotive network usually has a simple and static topology with known clock accuracy of its devices, so pre-designating a GM is an alternative method. However, a failure of the GM in this situation will lead to loss of synchronization and failure of the TSN mechanisms. Preparing some alternative GMs may be a potential approach to solve this problem. How to specify a GM and determine the number of alternative GMs needs to be further studied.
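One way to realize the pre-designated-GM-with-alternates idea mentioned above is a static, ranked grandmaster list: every node knows the order, and when the current GM stops announcing within a timeout, the next candidate takes over without running the full BMCA. The sketch below only illustrates that selection logic; the candidate names and the timeout are assumptions, not part of 802.1AS.

```python
import time

GM_CANDIDATES = ["gm-primary", "gm-backup-1", "gm-backup-2"]  # fixed rank order
ANNOUNCE_TIMEOUT_S = 3.0

def select_grandmaster(last_announce_s, now_s=None):
    """Pick the highest-ranked candidate whose Announce was heard recently."""
    now_s = time.monotonic() if now_s is None else now_s
    for gm in GM_CANDIDATES:
        if now_s - last_announce_s.get(gm, float("-inf")) <= ANNOUNCE_TIMEOUT_S:
            return gm
    return None  # no candidate alive: free-run and raise an alarm

# primary silent for 10 s, first backup still announcing
print(select_grandmaster({"gm-primary": 0.0, "gm-backup-1": 9.0}, now_s=10.0))
```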
Resource Management The main specified bandwidth reservation protocol is provided in 802.1Qat [35], which gives a decentralized stream registration and resource reservation procedure. However, any new arrival, departure or changed requirement of a stream leads to signaling exchanges along the transmission path of the stream. This overhead becomes significant as the number of hops increases. Thus, it cannot provide strict transmission latency for critical traffic. To overcome this shortcoming, 802.1Qcc [36] provides a way to control the network globally. However, it is still unclear how to appropriately allocate resources to all traffic with different priorities in the network from a global view, instead of only allocating resources to critical traffic. A comprehensive resource allocation plan is conducive to the efficient use of all resources and to minimizing the losses of non-critical services. In addition, although 802.1Qcc [36] makes it possible to allocate resources with scheduling taken into account, a joint resource allocation and scheduling approach needs to be studied further. Without considering scheduling, a resource allocation scheme cannot be efficiently realized by queuing and forwarding. Last but not least, efficient resource allocation should consider the link state and the traffic characteristics. Static resource allocation for different traffic classes with fixed bandwidth ratios not only wastes resources but is also harmful to guaranteeing the transmission latency of critical traffic, especially for bursty data or busy networks. Scheduling Scheduling methods are the core of current TSN specifications and studies. Scheduling not only eases bursty traffic and reduces network congestion but also queues and forwards different traffic reasonably according to its importance. The related scheduling methods can be divided into centralized and distributed approaches. Distributed scheduling does not strictly enforce the allocated bandwidth; it only softly obeys the reserved bandwidth, which can lead to latency violations, especially in multi-hop topologies. In addition, the required information exchange brings considerable overhead for dynamic topologies or long paths. For automotive applications, some unexpected traffic, such as a braking command, requires urgent transmission, which further complicates resource allocation and scheduling; distributed scheduling is not efficient in such cases. In contrast, centralized scheduling can schedule traffic without frequent information exchange and can schedule all nodes along the path simultaneously. However, the selection, deployment, and robustness of the central node significantly affect the scheduling results. How to balance the advantages of both types of scheduling methods is an open issue. In addition, different traffic classes are queued separately and may use different scheduling schemes; a unified scheduling approach would improve scheduling efficiency, for example by studying how to unify the Qav, Qbv, Qch, strict priority and asynchronous schemes used at the egress port. Thirdly, different types of traffic have different latency requirements. Although the TAS is a novel feature for the ultra-low-latency transmission of time-critical traffic, it has high implementation complexity and additional overhead due to GCL schedule generation/deployment and time synchronization. Moreover, it is unsuitable for aperiodic traffic flows. Different scheduling methods should therefore be used for different types of traffic.
For example, for periodic traffic, more static scheduling, such as time-triggered scheduling, can be used. For sporadic periodic traffic, how to schedule efficiently is a research direction. Lastly, the current time-triggered scheduling can be preempted by time-sensitive applications, as designed in the specification. However, the efficiency of this preemption should be investigated, especially for busy network scenarios; in the resource allocation method, non-critical traffic is originally allocated less bandwidth, so the benefit of critical traffic preempting this guarded bandwidth may be trivial. Hence, an important future work direction is to develop traffic transmission schedules with a reduced number of guard band occurrences in order to prevent wasted bandwidth and to keep latencies low. Scheduling based on traffic prediction, with more flexibility, can be expected. Configuration The configuration models can be divided into centralized and distributed groups. A hybrid configuration that also supports existing distributed resource allocation methods, such as the stream registration of 802.1Qat, is a research direction. In a hybrid configuration, the centralized node can provide configuration promptly and optimize performance from a global view, while distributed nodes can cooperate on the configuration in case the centralized node fails. Even so, the configuration cost and time still need to be reduced, because a higher configuration cost occupies more of the limited bandwidth, and a longer configuration time induces a longer delay for critical traffic. Therefore, the communication and computation requirements should be considered in the design and selection of the configuration node. In the future zonal architecture of the car, the domain controller may be a candidate for the configuration node. Traditionally, the in-vehicle network is configured statically at design time to guarantee the quality of service (QoS) requirements of the applications. Due to this static network configuration, it is normally difficult to introduce new applications during the lifetime of the vehicle. The need for an IVN supporting dynamic traffic will increase as the number of features and functionalities requiring dynamic traffic handling in the vehicle increases. Some of the dynamic applications are vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I) and vehicle-to-network (V2N) communication, adaptive cruise control, truck trailer systems and over-the-air (OTA) software updates [53]. Therefore, automotive applications require dynamic reconfiguration facilities to meet the requirements of new and evolving features. Several studies suggest that complementing TSN with a networking concept such as software-defined networking (SDN) is a beneficial configuration solution [54][55][56]. With additional protocols (e.g., NETCONF and OpenFlow), SDN allows for the instant configuration of routes and transport schedules based on a central control plane. It also allows splitting flows for transmission on multiple paths for load balancing, using the available bandwidth more efficiently, and making network-wide configurations, such as time synchronization.
Robustness and Certainty There is clearly a trade-off between transmission certainty and resource cost. For example, 802.1CB [38] uses redundant transmission to improve the successful transmission probability of critical traffic. Knowledge of the end-to-end topology is helpful for a relay to determine the redundant paths, but this topology cannot be obtained by each relay in 802.1CB. Although 802.1Qca can pre-define redundant paths for each stream, it cannot intelligently determine the redundant paths based on the available bandwidth and the requirements of other applications. As a result, the conditions needed to execute redundant transmission may not be met. This problem is especially serious for distributed redundancy schemes, such as 802.1CB, due to the lack of topology information. On congested paths, the replication of packets will place a further burden on some links, leading to uncertainty and high latency. Although some alternative links may successfully deliver the packets of a stream, the packet transmissions on the congested link only aggravate this condition without improving the robustness. Ongoing TSN Standards There is a dedicated standard draft, IEEE P802.1DG [57], which describes the TSN profile for automotive Ethernet communications. This standard specifies profiles for secure, highly reliable, deterministic-latency, automotive in-vehicle bridged IEEE 802.3 Ethernet networks based on the IEEE 802.1 TSN standards and IEEE 802.1 security standards [21]. It provides profiles for designers and implementers of deterministic IEEE 802.3 Ethernet networks [53] that support the entire range of automotive applications, including those requiring security, high availability and reliability, maintainability, and bounded latency. In addition, scheduling that is independent of synchronization is an open issue. The current standardized scheduling methods need synchronization among network nodes, and the synchronization affects the scheduling performance. IEEE P802.1Qcr [33] is a specification draft on asynchronous traffic shaping (ATS), which operates asynchronously, i.e., bridges and end stations need not be synchronized in time. ATS originates from the urgency-based scheduler (UBS) [58], which implements a per-flow interleaved regulator [59] based on rate-controlled service disciplines, providing deterministic latency with low implementation complexity. The provided asynchronous traffic shaping method prioritizes urgent traffic over relaxed traffic. Thus, ATS can utilize the bandwidth efficiently, even when operating under high link utilization with mixed traffic loads, i.e., both periodic and sporadic traffic. By using these queuing and forwarding schemes, not only can the latency of time-sensitive traffic be reduced and guaranteed, but the stream arrivals can also be smoothed, as a result of which network congestion can be reduced. A relevant future research direction is the IEEE 802.1Qcr-2020 [33] standard, which is promising because it offers bounded-latency asynchronous shaping with robustness properties (e.g., integrated policing) and permits compositional timing analysis.
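The per-flow regulator behind ATS can be pictured with a simple length-rate-quotient rule in the spirit of the UBS proposal: each frame is stamped with an eligibility time computed purely from the local clock and the committed rate, which smooths bursts without any network-wide synchronization. The sketch below is that textbook rule, not the exact processing defined in P802.1Qcr; the rate and frame sizes are examples.

```python
class LrqRegulator:
    """Per-flow length-rate-quotient regulator (a simplified UBS-style shaper).

    Frame i becomes eligible at max(arrival_i, eligible_{i-1} + len_{i-1}/rate),
    so the flow never exceeds its committed rate over any interval, using only
    the local clock (no network-wide synchronization needed).
    """

    def __init__(self, rate_bps):
        self.rate = rate_bps
        self.prev_eligible = float("-inf")
        self.prev_bits = 0

    def eligibility_time(self, arrival_s, frame_bits):
        eligible = max(arrival_s, self.prev_eligible + self.prev_bits / self.rate)
        self.prev_eligible, self.prev_bits = eligible, frame_bits
        return eligible

reg = LrqRegulator(rate_bps=10e6)   # 10 Mbps committed for this flow
for arrival_us in (0, 100, 200):    # three nearly back-to-back 1500-byte frames
    t = reg.eligibility_time(arrival_us * 1e-6, 1500 * 8)
    print(f"arrival {arrival_us:>3} µs -> eligible at {t * 1e6:6.1f} µs")
```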
This draft also specifies an information model for the capabilities of asynchronous traffic shaping. It further specifies a YANG data model and management information base (MIB) modules to support configuration and status reporting. Additionally, it provides an informative framework for worst-case delay analysis in static networks with static configurations, and it addresses errors and omissions in the description of the existing functionality. IEEE P802.1Qdd [60] specifies protocols, procedures, and managed objects for a resource allocation protocol (RAP) that uses the link-local registration protocol (LRP). It supports and provides backwards compatibility with the stream reservation and QoS capabilities, controls and protocols specified in IEEE Std 802.1Q [43]. RAP provides support for accurate latency calculation and reporting, can use redundant paths established by other protocols and is not limited to bridged networks. IEEE P802.1Qdj [61] provides configuration enhancements for TSN. This specification defines procedures, interfaces, and managed objects to enhance the three TSN configuration models. It specifies enhancements to the user/network interface (UNI), including new capabilities to support bridges and end stations, in order to extend the configuration capability, while preserving the existing separation between configuration models and protocol specifications. IEEE P802.1ASdm [62] amends 802.1AS to specify a hot standby protocol, procedures and managed objects that do not use the BMCA for the time-aware system. P802.1ASdm includes the function of combining the synchronization times of two generalized precision time protocol (gPTP) domains into one synchronization time for the application programs, the function of directing the synchronization time of one gPTP domain to another gPTP domain, and a mechanism to determine whether a gPTP domain has sufficient quality for hot standby. For transmission redundancy, IEEE P802.1CBcv amends IEEE Std 802.1CB [38] and provides extended FRER stream identification functions, including procedures and managed objects for adding new stream identification functions. TSN Based on Wireless Channels Beyond the research points specified in published and ongoing standards, TSN over wireless channels is attracting increasing attention, especially for Industry 4.0, since wireless connectivity can enable flexibility, scalability, and lower costs in next-generation factories.
802.1AS [26] time synchronization over a wireless channel has been designed, but further work is needed on the accuracy and timeliness of the synchronization. Currently, because a 5G system (5GS) does not provide time synchronization between the user equipment (UE), radio access network (RAN), and user plane function (UPF), a new time-synchronization mechanism is required. In addition, a new buffering and scheduling mechanism is required for processing TSN traffic, because a deterministic as well as a low delay must be guaranteed. Furthermore, unlike wired communications, the broadcast nature of wireless communications causes interference among nodes, so the resulting reduction in robustness should be considered. Correspondingly, resource management, channel access, scheduling, configuration and security need to be further investigated. Besides 5GS, WLAN can also be used for TSN. However, because WLAN cannot detect collisions, it uses a random back-off counter to avoid collisions. The random back-off mechanism increases the transmission delay and causes high jitter. Thus, maintaining the delay within a particular range is challenging. For automotive applications, since Ethernet is the most promising in-vehicle backbone network technology, TSN based on wireless communication may be future work. For the deterministic and low-latency transmission of inter-vehicle networks, TSN investigation is needed for wireless and 5G techniques. TSN Products Although many TSN products, from the physical layer to the application layer, have been designed, most of them are used for the deterministic transmission of Industry 4.0. For automotive networks, some preliminary products have been developed, most of which realize the AVB standards, such as 802.1AS, 802.1Qav and 802.1Qat. For TSN, some products realize the protocol stack for testing. More products with integrated TSN features are needed, which can be used in automobiles to guarantee deterministic transmission for the critical traffic of intelligent driving. Conclusions In this paper, we presented an in-depth survey of time-sensitive networking (TSN) for intelligent driving. We introduced the TSN-related standards specified by the Institute of Electrical and Electronics Engineers (IEEE) 802.3 working group (WG), the IEEE 802.1 WG, the Internet Engineering Task Force (IETF) and the OPEN Alliance (OA), from the physical layer to the network layer, which enable Ethernet to provide deterministic, low-latency and high-bandwidth data transmission for the emerging applications brought by intelligent driving. Furthermore, we reviewed corresponding automotive products based on these TSN specifications. In addition, we analyzed a minimum set of specifications that should be considered to realize TSN functions for automotive applications, based on which we presented a demo setup. Based on our survey, we summarized the existing TSN techniques, identified open issues, and proposed potential solutions to address them. We also presented some promising techniques and ongoing TSN standards, including new designs for synchronization, configuration, robustness and resource allocation, as well as TSN over wireless channels, followed by a discussion of their feasibility for automotive applications. With the aid of this survey, researchers can obtain a quick understanding of the contents, progress and challenges of TSN in automotive networks. Furthermore, developers of TSN can draw lessons from the solutions provided in this paper, both in terms of theory and practice.
Figure 4. A basic TSN protocol stack model of automotive networks. Figure 5. A demo setup for some typical TSN functions. Figure 6. Time offset between two nodes. Figure 7. End-to-end delay of traffic transmission. Table 3. Application of TSN standards to intelligent driving applications. Table 4. Traffic classes and requirements of vehicular applications.
13,938.4
2023-07-22T00:00:00.000
[ "Engineering", "Computer Science" ]
Feed-forward true carrier extraction of high baud rate phase shift keyed signals using photonic modulation stripping and low-bandwidth electronics Retrieving the full information carried by phase shift keyed (PSK) data streams requires a reference local oscillator (LO). If the receiver utilizes digital signal processing (DSP), a free-running LO can be used, although several benefits can be derived from generating an optical LO that is locked in frequency and phase to the original signal carrier (which is unfortunately suppressed in the PSK data modulation process). Here, we present a new concept of carrier recovery. Using nonlinear optics, we strip the data modulation and derive an error signal proportional to the phase/frequency difference between a free running intradyne LO and the data-stripped signal. After extracting this frequency difference (using slow electronics), we frequency shift the free running LO by this amount, effectively obtaining a homodyne LO. The carrier is recovered to a precision of better than ±0.5 Hz and the method is tested by performing homodyne detection of a 20 Gbaud binary PSK signal. ©2011 Optical Society of America OCIS codes: (060.1660) Coherent communications; (070.4340) Nonlinear optical signal processing. References and links 1. M. G. Taylor, “Coherent detection method using DSP for demodulation of signal and subsequent equalization of propagation impairments,” IEEE Photon. Technol. Lett. 16(2), 674–676 (2004). 2. K. Kim, K. Croussore, X. Li, and G. Li, “All-optical carrier synchronization using a phase-sensitive oscillator,” IEEE Photon. Tech. Lett. 19(13), 987–989 (2007). 3. S. K. Ibrahim, S. Sygletos, R. Weerasuriya, and A. D. Ellis, “Novel carrier extraction scheme for phase modulated signals using feed-forward based modulation stripping,” European Conference on Optical Communications (ECOC), 19–23 Sept. 2010, Torino, Italy, paper We7A4. 4. A. Chiuchiarelli, M. J. Fice, E. Ciaramella, and A. J. Seeds, “Effective homodyne optical phase locking to PSK signal by means of 8b10b line coding,” Opt. Express 19(3), 1707–1712 (2011). 5. G.-W. Lu and T. Miyazaki, “Optical phase add-drop for format conversion between DQPSK and DPSK and its application in optical label switching systems,” IEEE Photon. Technol. Lett. 21(5), 322–324 (2009). 6. J.Kakande, A. Bogris, R. Slavík, F. Parmigiani, D. Syvridis, P. Petropoulos, and D.J. Richardson, “First demonstration of all-optical QPSK signal regeneration in a novel multi-format phase sensitive amplifier,” ECOC 2010 PD 3.3 (2010). 
Introduction Current high-speed optical coherent receivers use intradyne or heterodyne coherent detection aided by DSP to retrieve a reference carrier for phase demodulation of carrier-suppressed PSK signals [1]. However, for many applications it may be advantageous, and in certain cases a requirement, to recover the signal carrier directly in the optical domain. For example, for homodyne all-optical regeneration, a reference local oscillator (LO) must be synthesized locally. Another example is the afore-mentioned coherent detection, in which it may be advantageous to pre-process the signal optically (e.g., to demultiplex it) and/or to perform homodyne coherent detection to reduce the electronic DSP processing demands. The urgent need for carrier extraction is evidenced by recent reports in the literature of several novel carrier recovery schemes [2][3][4]. Generally, feed-back [2] or feed-forward methods [3] are used. Alternatively, the carrier can be transmitted in a separate polarization or frequency channel to the signal, or the data coding may be modified to leave some residual component of the carrier in the data spectrum [4]. The feedback methods require short loop delays to achieve reasonable (> MHz) bandwidths, something that is ultimately limited by the physical layout of the electronic and optical devices. The published feed-forward schemes are limited to processing signals with bandwidths less than that of the electronics (e.g., in [3], 10 Gbaud signals are processed with >10 GHz electronics). Here, we present a novel method that is based on a feed-forward configuration and which generally requires electronics slower than the baud rate of the data signal. It consists of two stages. In the first stage, ultrafast four-wave mixing (FWM) is used to down-convert the carrier variations to the baseband (<10 GHz). In the second stage, the carrier variations are processed electronically and transferred back into the optical domain via optical modulation. Principle of the proposed method As suggested above, we start with an intradyne LO and 'measure' the instantaneous frequency difference between the intradyne LO and the data carrier. Subsequently, we shift the intradyne LO by this amount, obtaining a homodyne LO (an LO perfectly locked to the data carrier). To measure the instantaneous frequency difference with slow electronics, we first need to strip the (fast) data modulation off the incident data stream [5]. Modulation stripping [5] can in principle handle phase-encoded (PSK) signals with an arbitrary number of levels. For the sake of simplicity, we demonstrate it on binary PSK (BPSK) and quadrature PSK (QPSK); the extension to an arbitrary number of levels is straightforward. First, the incident data signal is mixed via FWM with a continuous-wave (CW) pump in a non-linear medium (e.g., a highly non-linear optical fiber, HNLF). For BPSK-modulated data, momentum conservation requires that the first idler (Fig.
1) has the phase φ_Idler1 = 2φ_Signal - φ_Pump. (1) Since the BPSK data phase takes only the values 0 or π, its doubled contribution is 0 or 2π and the data modulation is therefore stripped from this idler. For QPSK, we need to consider cascaded FWM processes, in which the generated idlers interact with the original pump and data signals, producing higher-order idlers. In QPSK, data are encoded using the four logical phase levels 0, π/2, π and 3π/2, and the third idler has the phase φ_Idler3 = 4φ_Signal - 3φ_Pump, (2) in which the data contribution is a multiple of 2π and is thus removed. It can easily be seen that for an M-level PSK, the (M-1)st idler, with phase φ_Idler(M-1) = Mφ_Signal - (M-1)φ_Pump, (3) is modulation stripped. Although the idlers of interest (e.g., the 1st for BPSK and the 3rd for QPSK) are free of data modulation, they cannot be used as a homodyne LO for the following reasons. First, they possess phase fluctuations originating from the pump laser, which itself has a finite linewidth. Secondly, the idler wave does not follow the signal carrier itself, but a multiple of it. Finally, the idler wave is at a different wavelength, which is undesirable for most applications (e.g., it cannot be used for homodyne detection). Our method includes further steps to mitigate all three of the above-mentioned issues. For the sake of simplicity, we explain it in the context of BPSK-modulated signals; the extension to higher modulation formats is again straightforward. First, we perform the modulation stripping as described above; in parallel, however, we perform a second FWM process between a second CW signal (the intradyne LO) and a component of the same pump. The intradyne LO has its wavelength reasonably close to that of the data carrier (e.g., <10 GHz away). The criteria by which we define 'reasonably close' will become clear later. The frequency difference between the intradyne LO and the data carrier is denoted as Ω_beat/2 in Fig. 2. The modulation stripping produces an idler denoted 'Idler', while the second FWM process produces another idler denoted 'LO Idler' in Fig. 2. Mathematically, this first stage generates two idlers with frequencies ω_Idler = 2ω_Signal - ω_Pump and ω_LOIdler = 2ω_LO - ω_Pump. (4) Subsequently, we filter the two idlers and beat them together at a photodetector (Fig. 3), obtaining a radio frequency (RF) beat of the two idlers at a frequency Ω_beat that is twice the frequency difference between the intradyne LO and the data carrier (see Fig. 2), as follows from the basic frequency matching condition of the FWM. Using Eq. (4) we obtain Ω_beat = ω_Idler - ω_LOIdler = 2(ω_Signal - ω_LO), (5) which no longer depends on the pump frequency. Considering that the RF beat frequency Ω_beat can readily be divided by two, we get ω_Signal = ω_LO + Ω_beat/2. (6) Thus, the carrier (homodyne LO) can be straightforwardly obtained by shifting the intradyne LO frequency by the RF frequency Ω_beat/2. This could be done using an acousto-optic modulator, a single-sideband modulator, or a standard phase modulator followed by narrow-band filtering of the modulation sideband (e.g., by injection locking of a semiconductor laser), see Fig. 4. Frequency division at 10-GHz speeds can easily be carried out using a digital RF frequency divider. Now we can explain what we mean by the term 'reasonably close': Ω_beat needs to be sufficiently small to be detectable by a photodetector, and the frequency shifter needs to be able to apply a shift of Ω_beat/2. On the other hand, it must be large enough to ensure that the difference between the intradyne LO and the data carrier is always positive or always negative, as the beat detector in Fig. 3 can detect only the magnitude of Ω_beat and not its sign. A key feature is that the data modulation speed does not impose any limit on Ω_beat, meaning that very high baud rates can be processed despite Ω_beat being relatively small.
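To make the frequency bookkeeping of Eqs. (4)-(6) explicit: the detected beat equals twice the LO-carrier difference for BPSK (and M times the difference for M-PSK, hence the division by M mentioned later), so dividing the measured beat and shifting the intradyne LO by the result reproduces the carrier. The snippet below only performs this arithmetic and a plausibility check of the electronics window; the numerical values are examples, not measurements from the experiment.

```python
def homodyne_lo_frequency(intradyne_lo_hz, beat_hz, m_psk=2, lo_below_carrier=True):
    """Recover the carrier frequency from the measured idler beat.

    beat_hz = m_psk * |f_carrier - f_intradyne_LO|, so the required shift is
    beat_hz / m_psk, applied with the sign chosen at design time (the detector
    only measures the magnitude of the beat).
    """
    shift = beat_hz / m_psk
    return intradyne_lo_hz + shift if lo_below_carrier else intradyne_lo_hz - shift

def beat_in_range(beat_hz, divider_bw_hz, shifter_bw_hz, m_psk=2):
    """Check the beat sits inside the electronics window discussed in the text."""
    return beat_hz <= divider_bw_hz and beat_hz / m_psk <= shifter_bw_hz

f_lo = 193.10e12          # intradyne LO, assumed ~5 GHz below the carrier
beat = 10.0e9             # measured beat = 2 x 5 GHz for BPSK
print(homodyne_lo_frequency(f_lo, beat))                            # 193.105 THz
print(beat_in_range(beat, divider_bw_hz=12e9, shifter_bw_hz=6e9))   # True
```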
In currently installed telecom systems, the data carrier can fluctuate by up to ±1 GHz, and the above conditions thus dictate Ω_beat > 4 GHz. Using a digital frequency divider with 12 GHz bandwidth and a single-sideband modulator with 6 GHz bandwidth, we would get 6 GHz > Ω_beat > 4 GHz. However, lower-bandwidth processing should be possible simply by actively controlling the intradyne LO frequency so that it tracks the slowly drifting data carrier frequency, in which case the minimum bandwidth would be determined by the linewidths of the carrier and intradyne LO lasers. Assuming a practical value of 1 MHz for the two laser linewidths, electronics operating in the hundreds of MHz range should be more than sufficient in this instance. The method can easily be extended to higher modulation formats by using frequency division by M, which can be done with commercially available digital frequency dividers. Set-up We tested the method with BPSK data modulation. For a practical implementation, several details had to be addressed to allow efficient operation of the method. First, the two FWM processes cannot be performed simultaneously (same fiber, same polarization, same propagation direction), as the FWM product of 'Pump + Intradyne LO + Data' that carries the original data would be generated in the same frequency region as the two idlers of interest. Performing these two FWM processes in two different HNLFs would, however, lead to different phase variations of the two generated idlers as a result of the different acoustic pickup in the two HNLFs, which would generate phase variations in the (sub)kHz regime (where acoustic waves are present). To avoid this, we used the same HNLF but operated it bidirectionally, with Pump + Intradyne LO launched from the side opposite to Pump + Data. Another issue originates from the fact that the idler generated from the data signal has strong amplitude variations due to the amplitude variations of the data stream, as the data signal was generated using an amplitude modulator (symmetrically driven around the null point). This strongly disrupts the frequency division process. To eliminate such problems, we performed amplitude regeneration of the idler [3] via injection locking of a semiconductor laser (Eblana Photonics, Inc., Ireland). An alternative option would be to perform balanced detection. The set-up built is shown in detail in Fig. 5, with some key optical/RF spectral characteristics shown in Fig. 6. Results First, we switched off the data modulator and characterized how the free-running CW laser (intradyne LO) could be phase synchronized with the input optical wave (200 kHz linewidth laser, Eblana Photonics, Ireland). For this characterization, we observed the interference pattern between the original input signal and the output signal of the carrier recovery unit, Fig. 7a. Here, we clearly see that the two signals interfere with a slowly varying relative phase (on a time scale of seconds) due to thermal drift in the fibers. This experiment was further complemented by analyzing the RF spectrum of this interference (to shift it from zero frequency, a 140-kHz phase dither was introduced at the input of the original signal), Fig. 7b.
Here, we see that the beat between the two signals is narrower than 1 Hz (resolution limited by our RF spectrum analyzer), confirming the previous result that the carrier was recovered to better than 1 Hz precision (more than five orders of magnitude below the natural linewidth of the data laser). Following the static characterization, we tested the set-up using BPSK-modulated data at various data rates (up to 56 Gbaud), both straight from the transmitter and in the presence of high residual dispersion (corresponding to 50 km of SMF-28 fiber). By monitoring the optical and electrical spectra at various points in the set-up, we confirmed that the scheme worked properly (e.g., the digital frequency divider had sufficient signal power and signal-to-noise ratio to operate properly, the two slave lasers were reliably injection-locked, etc.). Following these checks, we performed homodyne detection at 20 Gbaud (limited in speed by our real-time oscilloscope). Constellation diagrams were plotted without any intermediate-frequency or phase-error estimation; the only electronic post-processing was digital dispersion compensation. The results are shown in Fig. 8a and Fig. 8b. Here, we see that the data were fully recovered with no intermediate frequency present, even after 50 km of SMF-28 dispersive propagation (an effect equivalent to 200 km for 10 Gbaud data). For comparison, the constellation obtained with a narrow-linewidth (kHz-range) free-running LO tuned carefully to obtain a low intermediate frequency is also shown in Fig. 8c. Conclusions We present and demonstrate a novel scheme for carrier recovery of phase-encoded signals capable of recovering the carrier at its original frequency with a precision better than 1 Hz. The processing bandwidth is virtually unlimited, as it is based on an ultrafast FWM process. In our demonstration, the carrier frequency of a semiconductor laser with a linewidth of 200 kHz is successfully recovered. We show results at a 20 Gbaud rate, limited only by our homodyne receiver. We also demonstrated the ability to recover the carrier from data significantly impaired by dispersion - data transmitted through 50 km of SMF-28 fiber at 20 Gbaud. The scheme can be straightforwardly modified to enable carrier recovery from higher modulation format signals, e.g., QPSK. Fig. 1. Modulation stripping - principle shown for the example of the BPSK and QPSK modulation formats. Fig. 2. First step of the proposed method: modulation stripping (a) is performed simultaneously with a similar process in which the data signal is replaced by an intradyne LO (b). Fig. 3. Beating of the two idlers at a photodetector produces a beat signal at Ω_beat. Fig. 4. Carrier recovery using an RF frequency divider and a single-sideband modulator as an example of an optical frequency shifter. Fig. 6. (a) Spectra measured at the output of the Ge-HNLF for 10 Gbaud (black solid) and 56 Gbaud (red dotted) data rates together with the 'complementary signal' spectra (blue dash). (b) RF spectrum of the detected beat signal at 10 GHz obtained for 10 Gbaud (black solid) and 56 Gbaud (red dotted) data rates, respectively. Fig. 7. Set-up (upper panels) and results (lower panels) of the static measurement - homodyne in the temporal domain (a) and heterodyne in the RF frequency domain (b).
3,444
2011-12-19T00:00:00.000
[ "Engineering", "Physics" ]
Chemical Composition, Antifungal and Antibiofilm Activities of the Essential Oil of Mentha piperita L.

Variations in the quantity and quality of essential oil (EO) from the aerial parts of cultivated Mentha piperita were determined. The EO of an air-dried sample was obtained by hydrodistillation and analyzed by gas chromatography/mass spectrometry (GC/MS). The antifungal activity of the EO was investigated by broth microdilution methods as recommended by the Clinical and Laboratory Standards Institute. Biofilm formation inhibition was measured using an XTT reduction assay. Menthol (53.28%) was the major compound of the EO, followed by menthyl acetate (15.1%) and menthofuran (11.18%). The EO exhibited strong antifungal activities against the examined fungi at concentrations ranging from 0.12 to 8.0 μL/mL. In addition, the EO inhibited the biofilm formation of Candida albicans and C. dubliniensis at concentrations up to 2 μL/mL. Considering the wide range of the antifungal activities of the examined EO, it might potentially be used in the management of fungal infections or in the extension of the shelf life of food products.

Introduction

Medicinal plants have been used for centuries in traditional medicine because of their therapeutic value. Mint species have been exploited by man for more than two thousand years, and peppermint itself has been used for more than 250 years [1]. Mentha piperita (family Lamiaceae) is a species found in Iran and many other parts of the world that has economic value for its flavoring, odor, and therapeutic properties in foods and cosmetic products. In addition, the leaves and flowers of M. piperita have medicinal properties [2,3]. Essential oils are valuable natural products used as raw materials in many fields, including perfumes, cosmetics, aromatherapy, phototherapy, spices, and nutrition. Peppermint (M. piperita) oil is one of the most popular and widely used essential oils, mostly because of its main components, menthol and menthone [4]. Previous studies have shown antiviral [5], antibacterial [6,7], antifungal [6,8-10], antibiofilm [11-13], radioprotective [14], antioedema [15], analgesic [16], and antioxidant [6] activities of the EO and methanolic extracts of herbal parts and callus cultures of M. piperita. In addition, M. piperita EO has been shown to inhibit radial fungal growth and aflatoxin production by Aspergillus species [17]. In the past two decades, the emergence of resistance to various antifungal drugs has accelerated dramatically. Azole-resistant Candida and Aspergillus species are the top pathogens responsible for nosocomial or food-borne infections [18,19]. In addition, the formation of biofilms by Candida species has raised concerns, because biofilms increase resistance to antifungal therapy and protect the microbial cells within them from host immune defenses [20-23]. An alternative approach to overcoming antibiotic resistance might be the use of natural products and phytochemicals. It has also been shown that some plant extracts efficiently inhibit the biofilm formation of C. albicans [24]. Moreover, EOs, especially those with known antibacterial effects, have the potential to be used in the food industry as preservatives and to increase the shelf life of products. Therefore, determining the antimicrobial properties of EOs might help to overcome microorganism resistance to antibiotics and prevent food spoilage.
The chemical composition of aromatic plants depends largely on individual genetic variability and on the plant part used [25-27]. The presence and concentration of certain chemical constituents of EOs also fluctuate with the season, climatic conditions, and site of plant growth [25,26]. The goal of this study was to investigate the chemical composition and the in vitro antifungal and antibiofilm activities of the essential oil of the leaves of M. piperita collected in the Fars region of Iran.

EO Preparation. At the full flowering stage, the aerial parts of M. piperita were hydrodistilled for 2.5 h using an all-glass Clevenger-type apparatus, according to the method outlined by the British Pharmacopoeia. The sample oils were dried over anhydrous sodium sulfate and stored in sealed vials at 4 °C before gas chromatography and gas chromatography-mass spectrometry (GC-MS) analysis.

EO Analysis by Gas Chromatography-Mass Spectrometry. The EO was analyzed by GC-MS. The analysis was carried out on a Thermoquest-Finnigan Trace GC-MS instrument equipped with a DB-5 fused silica column (60 m × 0.25 mm i.d., film thickness 0.25 mm). The oven temperature was programmed to increase from 60 °C to 250 °C at a rate of 4 °C/min and finally held for 10 min; the transfer line temperature was 250 °C. Helium was used as the carrier gas at a flow rate of 1.1 mL/min with a split ratio of 1/50. The quadrupole mass spectrometer was scanned over 35-465 amu with an ionizing voltage of 70 eV and an ionization current of 150 mA. GC-flame ionization detector (FID) analysis of the oil was conducted using a Thermoquest-Finnigan instrument equipped with a DB-5 fused silica column (60 m × 0.25 mm i.d., film thickness 0.25 mm). Nitrogen was used as the carrier gas at a constant flow of 1.1 mL/min; the split ratio was the same as that for GC-MS. The oven temperature was raised from 60 °C to 250 °C at a rate of 4 °C/min and held for 10 min. The injector and detector (FID) temperatures were kept at 250 °C and 280 °C, respectively. Semiquantitative data were obtained from FID area percentages without the use of correction factors.

Identification of EO Components. Retention indices (RIs) were calculated using the retention times of n-alkanes (C6-C24) injected after the oil under the same temperature conditions. The compounds were identified by comparing their RIs with those reported in the literature, and their mass spectra were compared with those in the Wiley Library [28]. [29,30]. The antifungal susceptibility of clinical isolates of the tested fungi against fluconazole was examined by microdilution and disk diffusion methods [31,32].

Determination of Minimum Inhibitory Concentration. Minimal inhibitory concentrations of the EO against standard and clinical strains of the fungi were determined by the broth microdilution method as recommended by the Clinical and Laboratory Standards Institute (CLSI), with some modifications [31,32]. Briefly, RPMI-1640 (with L-glutamine and phenol red, without bicarbonate) (Sigma, USA) was prepared and buffered at pH 7.0 with 0.165 mol/L 3-(N-morpholino)propanesulfonic acid (MOPS) (Sigma-Aldrich, Steinheim, Germany). Serial dilutions of the EO (0.06 to 16 μL/mL) were prepared in 96-well microtiter trays using RPMI-1640 medium (Sigma, St. Louis, MO, USA) buffered with MOPS (Sigma, St. Louis, MO, USA). Two-fold dilutions of fluconazole were also prepared for each of the tested fungi, with final concentrations of 0.25 to 128 μg/mL.
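Retention indices of the kind described above are commonly computed by linear interpolation between the retention times of the bracketing n-alkanes (the van den Dool and Kratz formulation for temperature-programmed runs). The short sketch below is a generic illustration with made-up retention times, not data from this study; the value of about 1171 for menthol simply echoes its commonly reported literature RI on a DB-5 column.

```python
def retention_index(t_x, alkane_rt):
    """Linear (temperature-programmed) retention index.

    t_x       : retention time of the unknown compound
    alkane_rt : dict mapping n-alkane carbon number -> retention time,
                measured under identical GC conditions
    RI = 100 * (n + (t_x - t_n) / (t_{n+1} - t_n)), where C_n and C_{n+1}
    are the alkanes eluting just before and just after the unknown.
    """
    carbons = sorted(alkane_rt)
    for n, n_next in zip(carbons, carbons[1:]):
        t_n, t_next = alkane_rt[n], alkane_rt[n_next]
        if t_n <= t_x <= t_next:
            return 100 * (n + (t_x - t_n) / (t_next - t_n))
    raise ValueError("compound elutes outside the alkane window")

# Hypothetical retention times (minutes) for the C11 and C12 n-alkanes.
alkanes = {11: 20.10, 12: 23.40}
print(round(retention_index(22.44, alkanes)))   # ~1171, in the range reported for menthol
```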
Stock inocula were prepared by suspending three colonies of the examined yeast in 5 mL of sterile 0.85% NaCl and adjusting the turbidity of the suspension to a 0.5 McFarland standard at 630 nm (this yields a stock suspension of 1-5 × 10^6 cells/mL). For moulds (Aspergillus spp.), conidia were recovered from 7-day-old cultures grown on potato dextrose agar with a loop wetted with Tween 20. The collected conidia were transferred into sterile saline, and their turbidity was adjusted to an optical density of 0.09 to 0.11, which yields 0.4-5 × 10^6 conidia/mL. Working suspensions were prepared by making 1/50 and 1/1000 dilutions of the stock suspension with RPMI for moulds and yeasts, respectively. After the addition of 0.1 mL of the inoculum to the wells, the trays were incubated at 30 °C for 24-48 h in a humid atmosphere. 200 μL of the uninoculated medium was included as a sterility control (blank). In addition, growth controls (medium with inoculum and 5% (v/v), without the EO or fluconazole) were also included. The growth in each well was compared with that of the growth control well. MICs were determined visually and defined as the lowest concentration of the EO that produced no visible growth. Each experiment was performed in triplicate. In addition, minimum fungicidal concentrations (MFCs) of all the examined agents were determined by culturing 10 μL from the wells showing no visible growth onto SDA plates. MFCs were defined as the lowest concentration that showed either no growth or fewer than 4 colonies, corresponding to killing of 98% of the initial inoculum.

For the biofilm assays, cells were resuspended in RPMI 1640 supplemented with L-glutamine (Gibco) and buffered with morpholinopropanesulfonic acid (MOPS), and the cell densities were adjusted to 1.0 × 10^6 cells/mL after counting with a hemocytometer. Serial dilutions of the EO (0.015-8 μL/mL) in RPMI 1640 were prepared in presterilized, polystyrene, flat-bottom, 96-well microtiter plates (Nunc). After the addition of 0.1 mL of the yeast inoculum to the wells, the trays were incubated at 30 °C for 24-48 h in a humid atmosphere. 200 μL of the uninoculated medium was included as a negative control (blank). In addition, RPMI with yeasts but without the EO served as the positive control.

Biofilm Inhibition Assay. A semiquantitative measure of biofilm formation was obtained using a 2,3-bis(2-methoxy-4-nitro-5-sulfophenyl)-2H-tetrazolium-5-carboxanilide (XTT) reduction assay. XTT (Sigma Chemical Co.) was prepared as a saturated solution at a concentration of 0.5 mg/mL in Ringer's lactate. This solution was filter-sterilized through a 0.22-μm-pore-size filter, divided into aliquots and then stored at −70 °C. Prior to each assay, an aliquot of the XTT stock solution was thawed and treated with menadione sodium bisulfite (10 mM, prepared in distilled water; Sigma Chemical Co.) to obtain a final concentration of 1 μM menadione. A 100 μL aliquot of XTT-menadione was then added to each prewashed well. The plates were then incubated in the dark for 2 hours at 37 °C, and the colorimetric change at 490 nm (a reflection of the metabolic activity of the biofilm) was measured with a microtiter plate reader (Titertekplus MS2 reader, UK) [10].
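The MIC read-out described above reduces to building a two-fold dilution series and reporting the lowest concentration with no visible growth. The sketch below is a generic Python illustration of that logic; the growth pattern is invented and not data from this study.

```python
def twofold_series(c_max, c_min):
    """Two-fold dilution series from c_max down to c_min (same units)."""
    series = [c_max]
    while series[-1] / 2 >= c_min:
        series.append(series[-1] / 2)
    return series

def mic(concentrations, visible_growth):
    """Lowest concentration with no visible growth (MIC); None if all wells grew."""
    candidates = [c for c, grew in zip(concentrations, visible_growth) if not grew]
    return min(candidates) if candidates else None

# EO series used in the study: 16 ... 0.06 uL/mL (nine wells).
concs = twofold_series(16.0, 0.06)
print(concs)  # [16.0, 8.0, 4.0, 2.0, 1.0, 0.5, 0.25, 0.125, 0.0625]

# Hypothetical growth pattern for one isolate (True = visible growth).
growth = [False, False, False, False, False, True, True, True, True]
print("MIC =", mic(concs, growth), "uL/mL")   # 1.0 uL/mL in this made-up example
```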
Results and Discussion

The MIC and MFC values of the EO against the tested fungi are summarized in Table 2. The EO inhibited the growth of all of the tested yeasts at concentrations of 0.12-4 μL/mL. Furthermore, the EO exhibited fungicidal activity (MFC) against all of the above-mentioned yeasts at concentrations ranging from 1 to 8 μL/mL. No significant differences in inhibitory concentrations were found between azole-resistant and azole-susceptible strains. In addition, the EO inhibited the growth of, and killed, the standard strain of Cryptococcus neoformans at a concentration of 4 μL/mL. All of the Aspergillus standard strains were susceptible to M. piperita EO at concentrations of 0.5-4 μL/mL (Table 3). As shown in Table 4, the EO completely inhibited the biofilm formation of C. albicans and C. dubliniensis at concentrations of 1 μL/mL and 2 μL/mL, respectively.

Consistent with previous reports [4,7,33], we identified menthol as one of the main constituents of the EO. The higher concentration of menthol in this study compared with some previous reports [4,34] may reflect variations due to the geographical location from which the plants were collected.

(Table 2, excerpt: C. krusei (n = 1), MIC 0.5, MFC 1.0 μL/mL; C. dubliniensis (n = 5), MIC 2.4 (2-4), MFC 6.0 (4-8) μL/mL.)

In this study, the EO exhibited fungistatic and fungicidal activities against both standard and clinical strains of Candida species at concentrations ranging from 0.5 μL/mL to 8 μL/mL, which compares well with previous investigations [6,8,9,33]. One of the encapsulated yeasts, C. neoformans, is a well-known, primarily opportunistic pathogen that produces chronic and life-threatening meningitis. According to the findings of this study, the examined oil killed the standard strain of C. neoformans at a concentration of 4 μL/mL. Similar to previous studies [10,17], the EO of M. piperita exhibited strong anti-Aspergillus activity, with MIC values ranging from 0.5 to 4 μL/mL. MFCs of the EO against the tested fungi were similar to, or up to two times greater than, the corresponding MICs. Since the EO exhibited a similar antifungal effect against the tested azole-resistant and azole-susceptible strains, it could be assumed that its mechanism of action differs from that of the above-mentioned antifungal drug. One of the main characteristics of EOs is their hydrophobicity, which enables their incorporation into the cell membrane. The tested EO was rich in menthol; this monoterpene bears a hydroxyl group and has been reported to exert its antimicrobial activity through disruption of the cytoplasmic membrane [35,36]. Biofilm formation by Candida species is a phenomenon that promotes survival, pathogenesis, and drug resistance. The Candida species most commonly associated with biofilm formation is C. albicans. In the present study, biofilm formation was inhibited completely at concentrations of up to 2 μL/mL in a dose-dependent manner, which is comparable to the study of Agarwal et al. [11].

Conclusion

As industries tend to reduce the use of chemical preservatives in their products, the EO of M. piperita, with its antimicrobial activity, might be considered a natural option for maintaining or extending the shelf life of products. In addition, the palatable taste of the EO at the concentrations needed for antimicrobial activity adds to its appeal. These EOs might also be considered for developing products to control fungal infections. As all of these tests were performed in vitro, the next step may be further investigations in animal models to determine whether infection can be inhibited by the EO.
3,080
2012-12-13T00:00:00.000
[ "Biology", "Agricultural And Food Sciences" ]
Socioeconomic inequalities of outpatient and inpatient service utilization in China: personal and regional perspectives

Background China's health system has shown remarkable progress in health provision and health outcomes in recent decades; however, inequality in health care utilization persists and poses a serious social problem. While government pro-poor health policies have addressed affordability as the major obstacle to equality in health care access, this policy direction deserves further examination. Our study examines the issue of health care inequalities in China, analyzing both regional and individual socioeconomic factors associated with the inequality, and provides evidence to improve governmental health policies. Methods The China Health and Nutrition Survey (CHNS) 1991–2011 data were used to analyze the inequality of health care utilization. The random-effects logistic regression technique was used to model health care utilization as the dependent variable, and income and regional location as the independent variables, controlling for individuals' age, gender, marital status, education, health insurance, body mass index (BMI), and period variations. The dynamic trend of 1991–2011 regional disparities was estimated using an interaction term between the regional group dummy and the wave dummy. Results The probability of using outpatient and inpatient services during the previous 4 weeks was 8.6% and 1.1%, respectively. Compared to urban residents, suburban (OR: 0.802, 95% CI: 0.720–0.893), town (OR: 0.722, 95% CI: 0.648–0.804), rich village (OR: 0.728, 95% CI: 0.656–0.807) and poor village (OR: 0.778, 95% CI: 0.698–0.868) residents were less likely to use outpatient services; and rich village (OR: 0.609, 95% CI: 0.472–0.785) and poor village (OR: 0.752, 95% CI: 0.576–0.983) residents were less likely to use inpatient health care. The differences between income groups were not significant, except the difference between the top and bottom income groups in outpatient service use. Conclusion Regional location was a more important factor than individual characteristics in determining access to health care. Besides demand-side subsidies, Chinese policy makers should pay enhanced attention to health care resource allocation to address inequity in health care access.

Background

Since the 1978 reform and opening-up period, China has experienced enormous demographic and socioeconomic changes. China's gross domestic product (GDP) per capita increased from $US1145 in 2000 to $US8016 in 2015 [1], and China has also made remarkable progress in the development of its health care system. For example, practicing (assistant) physicians per thousand population grew from 1.68 in 2000 to 2.12 in 2014, and the life expectancy of the Chinese population increased by 4 years [2]. However, rapid growth and longevity have brought increased income inequality, with the individual income Gini coefficient rising from 0.401 in 2000 to 0.462 in 2015. Mirroring China's income inequalities, the gap between the rich and the poor in access to health care has also widened. There is strong evidence of pro-rich inequality in China's health system [3,4]. By 2014, the average yearly health care expenditure was $US189.05 among urban residents, but only $US109.22 among rural residents. Of course, inequality of health care utilization is not unique to China; it also exists in many other developing [5,6] and developed countries [7-10].
Studies conducted in a number of developed and developing countries have also found that rich residents have a higher probability than the poor of obtaining health care when sick. Therefore, improving health care equity and closing the gap between the rich and the poor in accessing health care have become priorities for health systems in many countries and organizations [11-13]. Equal access to qualified health care has two major components, affordability and availability [14]. Affordability and availability of health care services are two sides of the same coin, seen from the individual and regional perspectives respectively. Affordability of health care, mainly related to household income, health insurance reimbursement rates and other income-related factors, has received the most attention in assessing health systems and improving their performance. Availability describes access at the regional level and is linked mainly to health care resource allocation, governmental funding and government policies. Different causes of inequity should be tackled with different compensation strategies. Enhancing affordability mostly refers to demand-side financing strategies, such as pro-poor subsidies and insurance schemes for low-income residents [15]. Regional factors that compromise the availability of health care services should instead be corrected with supply-side compensation, such as grants for health care infrastructure and salary subsidies for health workers. In recent years, demand-side subsidies have been extensively applied to address health care access. Researchers and policy makers argue that demand-side financing is not only better at targeting subsidies to the poor but, by linking subsidies with output, also provides the right incentives for efficiency [16,17]. Supply-side financing strategies have been criticized for their inefficiencies [16,18]. Recent research has produced some unexpected findings that are incompatible with the above supply-side versus demand-side intuitions concerning inequalities. There is no unanimity in the research on health care equality showing that affordability is a more important cause of (in)equality of service utilization than availability. Feng et al. found that regional factors were more significant indicators of hospital births in China than individual income [19], and Li et al. showed that urban-rural and core-periphery gaps were significant determinants of health care access in Henan province [20]. Van Doorslaer et al. found no evidence of income-related inequity in GP visits in European countries [8], and Makinen et al. found that in developing countries richer households did not devote a consistently higher percentage of their consumption expenditures to health care [5]. Without careful examination, demand- and supply-side intuitions concerning health care inequalities can result in misguided policy-making. Health care financing strategies employed in China and other countries require careful analysis. This study analyzes the personal and regional socioeconomic factors associated with inequalities in health care utilization in China and provides evidence and recommendations for improving governmental health policy financing.

Data sources

Data were obtained from the China Health and Nutrition Survey (CHNS), which utilizes a multistage, random cluster sampling strategy to collect longitudinal data across 228 communities within 9 provinces of China.
A detailed description of the survey design and procedures is available in Zhang B et al. [21]. We accessed eight waves of CHNS surveys conducted between 1991 and 2011, with the final sample comprising 73,110 observations after excluding observations with missing data.

Measures/variables

The main objective of this study is to compare the impact of individual and regional factors on the inequality of health care utilization. The dependent variable measured whether a resident utilized outpatient or inpatient services during the past 4 weeks, and the two key independent variables were the individual's personal income quintile and the region of residence. Sampled individuals were divided into five income groups according to their income quintile (top to bottom), and sampled communities were divided into five regional categories: urban, suburban, town, rich village and poor village. Rich versus poor villages were categorized by their per capita income. Following Andersen's behavioral model of health care utilization, which comprises predisposing characteristics (such as demographics and position within the social structure), enabling characteristics (such as economic status), and need-based characteristics (perception of need for health services) [22], we controlled for age, gender, marital status, education level, health insurance and body mass index (BMI). Although self-reported health status (SRH) is a widely used proxy variable for health needs [23-25], SRH did not appear in all of the CHNS questionnaires. Since BMI is associated with SRH [26], health-related quality of life [27,28] and mortality risk [29], BMI was used to proxy health status. In addition, wave dummies for each of the 8 collection points between 1991 and 2011 were added into the model to capture period effects. Table 1 presents definitions for all variables in the analysis.

Statistical analysis

Data analysis was performed using Stata 14.0 (College Station, Texas, USA) and carried out with descriptive statistics and the random-effects logit model. Descriptive statistics for utilization of outpatient and inpatient services were reported as counts and proportions, with corresponding chi-square statistics and p-values, to examine whether there were statistically significant differences between subgroups. Second, we adopted the random-effects logit model for panel data to investigate regional disparities. Panel data models can offset potential problems associated with unobserved heterogeneity that may induce inconsistent estimators in cross-sectional models. To obtain consistent and efficient estimators, panel data analysis involves both fixed-effects and random-effects models. Some variables in our models do not vary over time, such as gender, occupation, and regional group, and these would be omitted in fixed-effects models. Since these variables are important factors explaining health care utilization, random-effects models were employed to retain them. The model was specified as

logit(P_it) = ln[P_it / (1 − P_it)] = β_0 + β_1 RG_it + β_2 IG_it + Σ_k α_k x_kit + μ_i,

where P_it represented the probability of utilization of outpatient or inpatient services by individual i at period t; RG_it indicated the regional group; IG_it represented the income group; β_0 was the intercept; and the coefficients β_1 and β_2 represented regional and income disparities, respectively. Further, x_kit were control variables, such as age, gender, marital status and education level, where α_k is the kth regression coefficient, and μ_i was the random effect representing the effect of individual i.
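As a purely illustrative companion to the model specified above, the short Python sketch below shows how the linear predictor maps onto a utilization probability and how a coefficient converts into an odds ratio. The intercept, control term and random effect are invented values; the 0.609 odds ratio simply echoes the rich-village inpatient estimate reported in this paper.

```python
import math

def utilization_probability(beta0, region_effect, income_effect, controls, random_effect=0.0):
    """Random-effects logit: eta = b0 + b1*RG + b2*IG + sum(a_k * x_k) + u_i."""
    eta = beta0 + region_effect + income_effect + sum(controls) + random_effect
    return 1.0 / (1.0 + math.exp(-eta))

# Invented numbers for illustration only.
beta0 = -2.4
beta_rich_village = math.log(0.609)   # logit coefficients are log odds ratios; OR = exp(beta)

p_urban   = utilization_probability(beta0, 0.0, 0.0, [0.3])
p_village = utilization_probability(beta0, beta_rich_village, 0.0, [0.3])
print(f"urban: {p_urban:.3f}, rich village: {p_village:.3f}")
print("odds ratio:", round(math.exp(beta_rich_village), 3))   # 0.609 by construction
```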
In addition, to explore the dynamic trend of regional disparities from 1991 to 2011, an interaction term between the regional group dummies and the wave dummies was added into the model.

Results

Table 2 displays the descriptive statistics for the variables in the entire sample as well as in the outpatient and inpatient samples. During the previous 4 weeks, the probability of using outpatient services was 8.6% and of using inpatient services was 1.1%. Table 2 shows that over the period 1991-2011, the probabilities of outpatient and inpatient service utilization first declined and then increased. The probabilities of using outpatient and inpatient services were significantly (p < 0.001) different across regional groups. Urban residents had the highest outpatient and inpatient service utilization, while individuals who lived in rich villages had the lowest probability. The results in Table 2 show that the rate of clinic visits was significantly (p < 0.001) different across income groups. The bottom-income group was more likely to use outpatient services, while the middle-income group was less likely to be outpatients. However, the rates of hospitalization did not vary significantly by income group. The results also indicated that outpatient and inpatient service utilization differed significantly across all the control variables except gender. Figures 1 and 2 show outpatient and inpatient service use among income and regional groups between 1991 and 2011. As shown in Fig. 1, the disparity in the outpatient rate between income groups was very small before 2004, reaching a minimum in 2000, but increased after 2004. The disparity in the outpatient rate between regional groups decreased before 2006, then began to widen. The two figures demonstrate that the disparity in inpatient use was larger than that in outpatient use, and that the disparity between regional groups was larger than that between income groups in most years.

Results of the random-effects regressions for income and regional disparities in health care utilization are presented in Table 3, with the region and income groups highlighted. As shown in the first column, after controlling for confounding variables, suburban (odds ratio (OR): 0.802, 95% confidence interval (CI): 0.720-0.893), town (OR: 0.722, 95% CI: 0.648-0.804), rich village (OR: 0.728, 95% CI: 0.656-0.807) and poor village (OR: 0.778, 95% CI: 0.698-0.868) residents were less likely to use outpatient services, but the differences between income groups were not significant, except the difference between the top and bottom income groups in outpatient service use (OR: 1.134, 95% CI: 1.021-1.258). The results also show that outpatient service utilization was more likely among older, female and married respondents, and less likely among those with high education levels and those in the high-BMI group. For inpatient use (column 3 in Table 3), significant differences were observed for rich village (OR: 0.609, 95% CI: 0.472-0.785) and poor village residents (OR: 0.752, 95% CI: 0.576-0.983), but there was no significant difference between income groups. Older age and health insurance increased the probability of hospitalization. When we excluded the regional groups in model 2 (columns 2 and 4), the results remained roughly the same. Our results show that the inequalities in health care utilization were fundamentally caused by regional disparities, not by personal income disparities.
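The odds ratios and 95% confidence intervals just quoted are derived from the logit coefficients and their standard errors. The snippet below reproduces that arithmetic; the standard error is an assumed value chosen so the output roughly matches the town-resident estimate above, since the paper does not report standard errors directly.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """OR and 95% CI from a logit coefficient and its standard error."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Example: a coefficient of ln(0.722) for town residents with an assumed SE of 0.055.
orr, lo, hi = odds_ratio_ci(math.log(0.722), 0.055)
print(f"OR = {orr:.3f}, 95% CI: {lo:.3f}-{hi:.3f}")   # roughly 0.722 (0.648-0.804)
```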
Table 4 and Fig. 3 present the dynamic trends of regional disparities in health care utilization from 1991 to 2011. The rate of outpatient utilization increased before 2004 and then declined. In urban areas, the rate of increase was much greater than in other regions before 2004, and the decline was more rapid from 2004 to 2009. The rate of inpatient health care utilization presented a fluctuating, decreasing trend. The rate of decrease was much smaller in the urban region and much greater in rich villages.

Discussion

Compared to urban residents, we found that suburban, town, rich village and poor village residents were less likely to use outpatient services, and that rich village and poor village residents were less likely to use inpatient health care. Differences between income groups were not significant, except the difference between the top and bottom income groups in outpatient service use. In the random-effects logit models with wave-region interactions, we found that China was making progress in increasing health care resource allocation and improving accessibility. Although the gap between urban and rural regions was closing, the disparity among regions remained significant. Largely consistent with existing studies, the major determinants of inequality of health care utilization in our study were age, gender, BMI, education, marital status and health insurance, of which the last three are not need-related factors. Elwell-Sutton et al. found that these non-need-related factors made the largest pro-rich contributions to health care use [30]. However, our results show that income was not as prominent a factor in health care inequality as in previous studies [4,30]. While health status proxies health need, and self-reported health status (SRH) is a widely used proxy variable for health need [23-25], SRH did not appear in all of the CHNS questionnaires. CHNS provided consistent data on BMI, and BMI is associated with SRH [26], health-related quality of life [19,31-34] and mortality risk [29]. Used as a proxy variable for health status, BMI had a significant association with outpatient utilization. Regional factors had a more important impact on health care utilization than individual income. That inequality between socioeconomic regions is more pronounced than inequality between individuals is also true in other countries [19,31-34]. Van Doorslaer et al. found that location of residence contributed to the inequality of health service utilization in Europe and the US [35]. Brezzi et al. also showed that, in addition to individual factors, the characteristics of the region where people live, such as the average skill endowment or employment rate, had a significant impact on the probability of unmet medical needs in selected OECD countries [33]. Devaux illustrated that the utilization of cancer screening services largely depended on the availability of national public screening programs, which varied by region across selected OECD countries [10]. In common with Chinese regional health inequalities [36], regional inequality also occurs in national development [37], economic growth [38], income levels [39] and education [40]. Health care accessibility in remote rural regions lags behind urban regions for several reasons. First, there were 1.51 practicing (assistant) physicians per thousand population in rural counties in 2014, less than half the 3.54 in urban cities [1]. A similar pattern can also be observed in the urban-rural allocation of nurses. Second, geographical factors also contribute to the availability of health care. Zhang et al.
found that residents whose homes were more than 5 km from the nearest health facility were less likely to utilize health care services than those living within 5 km of a health facility [4]. Third, government financial support is highly dependent on the local economy. The central government contributes only a limited share of the financial inputs into health care facilities, so rich provinces invest more in health than poorer provinces. Finally, the uneven quality of, and access to, social resources affects health care quality. Social resources, including education, public transport and commerce, are vital factors that attract human resources and funding [41]. Health care disparity is one of the demonstrated consequences of socioeconomic inequality [36]. Two turning points were identified in our trend analysis. From 1993 to 1997, the disparity in inpatient health care utilization among regions dramatically increased, with only urban areas displaying an increasing utilization trend, while other regions, especially villages, decreased rapidly (see Fig. 3). In China's health care reform history, commercialization of public health care, including profit making, diversification of services and cost recovery, was encouraged by the 14th Central Committee of the Chinese Communist Party in 1992 [42]. As shown in our results, the side effects of commercialization were apparent, with increasing inequality in access as health care facilities closed down or were sold to private owners [43]. The second turning point was associated with the New Cooperative Medical Scheme (NCMS), which reached a 97.5% enrolment rate by 2011 [44]. Pre-NCMS, about 80% of rural residents were not covered by any form of health insurance [43]. As the results of our study suggest, the NCMS significantly improved health care access in rural areas. Besides demand-side subsidies, policy makers should pay more attention to the equity of health care resource allocation. Governments have mainly focused on explicit pro-poor health policies that correct inequality in health care by enhancing the affordability of access, such as targeted health sector subsidies for the poor [45] and community-based health insurance [46]. Although health insurance is a key factor in promoting health facility access, our study showed that the gap between high- and low-income individuals was nearly closed and that income factors lost their significance after adjusting for other factors. There are some existing supply-side schemes in China. The most important one is governmental financial reimbursement of capital construction and equipment purchases. However, barriers impede their implementation. First, a variety of pro-poor demand-side subsidies co-exist within four ministries (Health, Social Security, Civil Affairs and Finance), but only two ministries (Health and Finance) are responsible for supply-side subsidies. Second, the amount of financial input into supply-side schemes is usually highly related to local government revenue, which means that differential health care spending between rich and poor local governments will perpetuate the health care gaps between regions. Our findings emphasize supply-side inequality in health care utilization. While income-related inequalities contributed to access to health care facilities, the importance of regional disparities in health care access has been underestimated. We recommend monitoring supply-side factors in health policies. Based on our findings, more hospitals, clinics, physicians and nurses should be allocated to remote rural areas to tackle health care facility availability.
If availability is the key barrier to health care access, then additional funding to enhance affordability will not significantly improve access. China's 1992 health care reforms, which 'marketized' hospitals, provide a lesson in how availability affects health care utilization regardless of affordability. This study has the following limitations. First, because it relies on secondary data, some measures of the dimensions of access to health care, such as affordability, and some potential confounding factors are missing. Second, all data were based on self-reporting, which might lead to recall and information bias. Third, the data were collected before 2012; external validity is therefore limited, since health care reform strategies are evolving and new regulations have been launched during the last several years. Lastly, as restricted by the secondary data, health care utilization based on need and that based on demand cannot easily be separated, so the results should be interpreted with care.

Conclusion

First, we found that regional factors were a more important determinant of inequalities in health care utilization than individual factors, especially income. Second, availability of services was a more prominent issue in China than affordability. While remaining cognizant of demand-side subsidies, policy makers should pay increased attention to inequalities in health care utilization arising from resource allocation.
4,629.8
2017-12-04T00:00:00.000
[ "Economics", "Sociology" ]
Repair of Nitric Oxide-damaged DNA in β-Cells Requires JNK-dependent GADD45α Expression*

Proinflammatory cytokines induce nitric oxide-dependent DNA damage and ultimately β-cell death. Not only does nitric oxide cause β-cell damage, it also activates a functional repair process. In this study, the mechanisms activated by nitric oxide that facilitate the repair of damaged β-cell DNA are examined. JNK plays a central regulatory role because inhibition of this kinase attenuates the repair of nitric oxide-induced DNA damage. p53 is a logical target of JNK-dependent DNA repair; however, nitric oxide does not stimulate p53 activation or accumulation in β-cells. Further, knockdown of basal p53 levels does not affect DNA repair. In contrast, expression of growth arrest and DNA damage (GADD) 45α, a DNA repair gene that can be regulated by p53-dependent and p53-independent pathways, is stimulated by nitric oxide in a JNK-dependent manner, and knockdown of GADD45α expression attenuates the repair of nitric oxide-induced β-cell DNA damage. These findings show that β-cells have the ability to repair nitric oxide-damaged DNA and that JNK and GADD45α mediate the p53-independent repair of this DNA damage.

Insulin-dependent diabetes mellitus is an autoimmune disease characterized by the selective destruction of insulin-secreting pancreatic β-cells found in the islets of Langerhans (1). Cytokines, released from invading leukocytes during insulitis, are believed to participate in the initial destruction of β-cells, precipitating the autoimmune response (2,3). Treatment of rat islets with the macrophage-derived cytokine interleukin-1 (IL-1) results in the inhibition of glucose-stimulated insulin secretion and oxidative metabolism and in the induction of DNA damage that ultimately results in β-cell death (4-6). Nitric oxide, produced in micromolar levels following enhanced expression of the inducible nitric-oxide synthase in β-cells, mediates the damaging actions of cytokines on β-cell function (7-9). Nitric oxide inhibits insulin secretion by attenuating the oxidation of glucose to CO2, reducing cellular levels of ATP and, thereby, attenuating ATP-inhibited K+ channel activity (10,11). The net effect is the inhibition of β-cell depolarization, calcium entry, and calcium-dependent exocytosis. In addition to the inhibition of β-cell function, nitric oxide induces DNA damage in β-cells (4,12,13). Nitric oxide or its oxidation products N2O3 and ONOO− induce DNA damage through direct strand breaks and base modification (14-16) and by inhibition of DNA repair enzymes, thereby enhancing the damaging actions of nitric oxide (17,18). Recent studies have shown that β-cells maintain a limited ability to recover from cytokine-mediated damage (19,20). The addition of a nitric-oxide synthase inhibitor to islets treated for 24 h with cytokine, and continued culture with the nitric-oxide synthase inhibitor and cytokine, results in a time-dependent restoration of insulin secretion and mitochondrial aconitase activity and in the repair of nitric oxide-damaged DNA (20,21). Nitric oxide plays a dual role in modifying β-cell responses to cytokines. Nitric oxide induces β-cell damage and also activates a JNK-dependent recovery response that requires new gene expression (22). The ability of β-cells to recover from cytokine-mediated damage is temporally limited because cytokine-induced β-cell damage becomes irreversible following a 36-h incubation, and islets at this point are committed to degeneration (19).
The purpose of this study was to determine the mechanisms by which β-cells repair nitric oxide-damaged DNA. Previous reports have shown that DNA damage induced by oxidizing agents, such as nitric oxide, is repaired through the base excision repair pathway (23), but how this pathway is activated in response to nitric oxide is unknown. Similar to the recovery of metabolic function, we now show that the activation of JNK by nitric oxide is required for repair of cytokine-induced DNA damage in β-cells. p53 is a logical candidate to mediate this repair because it plays a central role in DNA repair, is a target of JNK, and is activated by nitric oxide (24-27). However, we show that cytokines do not stimulate p53 phosphorylation, and nitric oxide fails to stimulate p53 accumulation and phosphorylation. Growth arrest and DNA damage (GADD) 45α is a DNA damage-inducible gene that can be regulated by both p53-dependent and p53-independent mechanisms (28-31). In contrast to p53, we show that cytokines stimulate GADD45α expression in a nitric oxide- and JNK-dependent manner and that siRNA-mediated knockdown of GADD45α results in an attenuation of the repair of nitric oxide-mediated DNA damage. These findings support a role for JNK in the regulation of GADD45α-dependent and p53-independent repair of nitric oxide-damaged β-cell DNA.

Islet Isolation and Cell Culture-Islets were isolated from male Sprague-Dawley rats (250-300 g) by collagenase digestion as described previously (32). Islets were cultured overnight in CMRL-1066 (containing 2 mM L-glutamine, 10% heat-inactivated fetal calf serum, 100 units/ml penicillin, and 100 μg/ml streptomycin) at 37 °C under an atmosphere of 95% air and 5% CO2 before experimentation. INS 832/13 cells were removed from growth flasks by treatment with 0.05% trypsin and 0.02% EDTA for 5 min at 37 °C, washed twice with RPMI 1640 medium, and plated at the indicated cell densities.

Comet Assay-DNA damage was assessed using the comet assay (single-cell gel electrophoresis) as described previously (4,33). Briefly, cells were harvested and embedded in 0.6% low-melting agarose on slides precoated with 1.0% agar. Samples were then incubated in lysing solution (2.5 M NaCl, 100 mM EDTA, 10 mM Trizma (Tris base), 1% Triton X-100) overnight. Following lysis, the slides were incubated in an alkaline electrophoresis buffer (0.3 M NaOH, 1 mM EDTA, pH > 13) for 40 min, followed by electrophoresis at 25 V/300 mA for 20 min. Slides were washed three times in 0.4 M Tris (pH 7.5) and stained with ethidium bromide (2 μg/ml). Comet images were captured using a Nikon Eclipse 90i, and the CASP program (34) was used to quantify the mean tail moment from 30 to 50 cells/condition.

Nitrite Determination-Nitrite production was determined from culture supernatants using the Griess assay as described previously (37). Fifty microliters of the Griess reagents were incubated with 50 μl of the culture supernatants, and the absorbance was measured at 540 nm using a Power Wave X-340 plate reader (Biotek Instruments). Nitrite concentrations were calculated using a sodium nitrite standard curve.
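Nitrite concentrations derived from a standard curve, as described above, are typically obtained by fitting a line to the absorbance of sodium nitrite standards and inverting it for the samples. The sketch below illustrates this with made-up absorbance values, not data from this study.

```python
import numpy as np

# Hypothetical sodium nitrite standards (uM) and their A540 readings.
std_conc = np.array([0, 5, 10, 20, 40, 80], dtype=float)
std_abs  = np.array([0.05, 0.09, 0.13, 0.21, 0.37, 0.69])

slope, intercept = np.polyfit(std_conc, std_abs, 1)   # linear standard curve

def nitrite_um(a540):
    """Convert a sample absorbance at 540 nm to nitrite (uM) via the standard curve."""
    return (a540 - intercept) / slope

for sample in (0.12, 0.30, 0.55):
    print(f"A540 = {sample:.2f}  ->  {nitrite_um(sample):.1f} uM nitrite")
```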
siRNA Transfection-siRNA transfection was performed using NeoFX transfection reagent (Ambion) according to the manufacturer's instructions. NeoFX transfection reagent and siRNA were diluted to give a final concentration of 30 nM siRNA. The diluted NeoFX-siRNA complex was added to each well and overlaid with 200,000 cells in 450 μl. Experiments were started 48 h following transfection. The following Silencer Select predesigned siRNAs were obtained from Ambion: p53 sense, 5′-CAAUUUCCCUCAAUAAGCUtt-3′, and GADD45α sense, 5′-CGUGCUUUCUGUUGCGAGAtt-3′.

Real Time PCR-RNA was isolated using the RNeasy kit (Qiagen). cDNA synthesis was performed using oligo(dT) and the SuperScript reverse transcriptase preamplification system according to the manufacturer's instructions (Invitrogen). Real-time PCR was performed using the Light Cycler 280 (Roche Applied Biosciences) to detect SYBR Green incorporation, according to the manufacturer's instructions. The fold increase in GADD45α mRNA accumulation was normalized to the housekeeping gene GAPDH. Primer sequences for GADD45α were: forward, 5′-TGGCTGCGGATGAAGATGAC-3′, and reverse, 5′-GTGGGGAGTGACTGCTTGAGTAAC-3′. Sequences for the GAPDH primers were: forward, 5′-GCTGGGGCTCACCTGAAGGG-3′, and reverse, 5′-GGATGACCTTGCCCACAGCC-3′.

JNK Activation Is Required for the Repair of Nitric Oxide-induced DNA Damage-β-Cells have the ability to repair nitric oxide-mediated DNA damage (21). Using the comet assay to measure DNA damage, we show that a 1-h treatment with the nitric oxide donor DEANO results in DNA comet formation (Fig. 1A) and a 5-fold increase in the mean tail moment (Fig. 1B). Removal of DEANO by washing and continued culture for 5 h results in the repair of DNA damage, as evidenced by the absence of a comet tail (Fig. 1A) and the return of the mean tail moment to control levels (Fig. 1B). Because JNK plays an essential role in the recovery of metabolic function in cytokine-treated islets (22), the effects of two chemically distinct inhibitors of JNK (the peptide inhibitor TAT-TI-JIP153-163 and the pharmacological inhibitor SP600125) on DNA repair were examined. Treatment of INS 832/13 cells with TAT-TI-JIP153-163 or SP600125 during the DEANO treatment and the recovery period results in the attenuation of β-cell DNA repair (Fig. 1A for comets and Fig. 1, B and C, for quantification of comets). As a control for JNK inhibition, TAT-TI-JIP153-163 inhibits IL-1-induced ATF-2 phosphorylation (ATF-2 is a JNK substrate) at a concentration that attenuates DNA repair (Fig. 1D). In addition, SP600125 attenuates IL-1-induced c-Jun phosphorylation in a concentration-related fashion that parallels the concentration-related inhibition of DNA repair in INS 832/13 cells (Fig. 1D). Consistent with our previous findings related to the recovery of metabolic function (22), these findings support a role for JNK in the repair of nitric oxide-damaged β-cell DNA.

Nitric Oxide Fails to Activate p53 in INS 832/13 Cells or Rat Islets-p53 is a transcription factor that participates in DNA repair through direct interaction with DNA repair enzymes, such as polymerase β (24,38), and through the enhanced expression of genes required for DNA repair, such as GADD45α (39). p53 is regulated by phosphorylation, resulting in its stabilization and activation (40). p53 is a candidate to mediate the repair of nitric oxide-damaged DNA because nitric oxide has been shown to activate p53 and because JNK directly phosphorylates p53 (26,27,41). However, p53 is not activated by nitric oxide in β-cells. IL-1 fails to stimulate p53 phosphorylation in both INS 832/13 cells (Fig. 2A) and rat islets (Fig. 2D). Although IL-1 fails to induce p53 phosphorylation, it does stimulate the accumulation of p53 to low levels in INS 832/13 cells (Fig. 2A), whereas p53 levels remain below the limits of detection in rat islets (Fig. 2D).
The level to which p53 accumulates in response to IL-1 in INS 832/13 cells is much lower than the levels observed in response to apoptotic stimuli, such as camptothecin (Fig. 2A). Further, IL-1-induced p53 expression does not depend on nitric oxide. The nitric-oxide synthase inhibitor NMMA, which attenuates nitric oxide production (Fig. 2, B and E), does not modulate IL-1-induced p53 accumulation (Fig. 2A), and DEANO fails to stimulate p53 phosphorylation or accumulation in INS 832/13 cells (Fig. 2C) or rat islets (Fig. 2D). Camptothecin, which has been shown to activate p53 (42), stimulates p53 phosphorylation and accumulation in INS 832/13 cells (Fig. 2, A and C). These findings indicate that p53 is not activated by nitric oxide in β-cells.

(Fig. 2 legend, continued: following the treatments, culture supernatants were harvested and nitrite production was determined (B and E); cells were harvested, and phospho-p53 and total p53 were determined by Western blot analysis; camptothecin was used as a positive control for p53 activation, and GAPDH was used as a loading control; results are representative of at least three experiments.)

p53 Is Not Required for Repair of Nitric Oxide-induced DNA Damage-The lack of p53 expression or activation in response to nitric oxide in β-cells suggests that DNA repair in β-cells (which is activated by nitric oxide) does not require p53. To examine this issue directly, DNA repair was examined in cells in which p53 expression was attenuated using siRNA. INS 832/13 cells transfected with scramble siRNA or p53 siRNA were treated with IL-1 for 24 h, or treated with IL-1 for 24 h followed by the addition of the nitric-oxide synthase inhibitor NMMA for an additional 12-h incubation to stimulate DNA repair. IL-1 induces DNA damage in both scramble and p53 siRNA-transfected cells, as indicated by the 3-fold increase in the mean tail moment (Fig. 3A). The addition of NMMA followed by 12 additional h of incubation (in the presence of IL-1) results in the complete repair of nitric oxide-induced DNA damage in both the scramble and p53 siRNA-transfected cells. As shown by Western blot analysis, p53 accumulation in response to IL-1 treatment is attenuated in cells transfected with p53 siRNA (Fig. 3B). Scramble siRNA has no effect on IL-1-induced p53 expression (data not shown). These findings indicate that p53 is not required for the repair of nitric oxide-damaged DNA in β-cells, because nitric oxide fails to stimulate p53 activation and siRNA knockdown of p53 does not modify DNA repair in IL-1-treated β-cells.

Nitric Oxide Induces GADD45α Expression in INS 832/13 Cells and Rat Islets-While we examined the potential role of p53 in mediating DNA repair in β-cells, we also examined the expression of GADD45α, a DNA damage-inducible gene that can be regulated by p53-dependent and p53-independent pathways (29-31). IL-1 induces GADD45α mRNA accumulation in a time-dependent fashion that is maximal following a 12-h incubation (Fig. 4, A and B). The expression of GADD45α is nitric oxide-dependent because the nitric-oxide synthase inhibitor NMMA, which attenuates nitric oxide production (Fig. 4C), inhibits IL-1-induced GADD45α mRNA accumulation (Fig. 4B). The nitric oxide donor DEANO also induces GADD45α mRNA accumulation in INS 832/13 cells (Fig. 4D) and rat islets (Fig. 4E). Similarly, treatment of human islets for 3 h with DEANO results in a >2-fold increase in GADD45α expression as measured by Affymetrix gene chip analysis (data not shown).
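The fold inductions just described come from real-time PCR normalized to GAPDH. The paper does not spell out the calculation, but a common convention is the 2^(−ΔΔCt) method; the sketch below illustrates it with invented Ct values, so the numbers are purely for demonstration.

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method, normalized to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Invented Ct values: GADD45a vs GAPDH, IL-1-treated vs untreated cells.
print(round(fold_change(24.1, 18.0, 27.3, 18.2), 2))   # 8.0-fold induction in this example
```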
In control cells, DEANO-induced DNA damage is repaired completely following removal of nitric oxide by washing and continued culture for 5 h in the absence of the nitric oxide donor. Importantly, DEANO-induced DNA damage following a 1-h incubation is enhanced by approximately 2-fold in INS 832/13 cells transfected with GADD45α siRNA, and the ability of these cells to repair damaged DNA during the 5-h recovery period is reduced by more than 50% compared with scramble siRNA-transfected control cells (Fig. 6A). The induction of GADD45α mRNA expression by DEANO is attenuated by GADD45α siRNA and is not affected by scramble siRNA (Fig. 6B). These results indicate that the induction of GADD45α by nitric oxide participates in the repair of nitric oxide-induced DNA damage.

DISCUSSION

It has been shown that nitric oxide, produced in micromolar levels by the β-cell in response to cytokines, mediates the damaging effects of cytokines through the inhibition of mitochondrial enzymes, inhibition of glucose-stimulated insulin secretion, and the induction of DNA strand breaks and base modifications (8,9). We have established that β-cells possess an active defense or repair process that affords these cells a limited capacity to recover from these damaging effects of nitric oxide (20). Although nitric oxide activates multiple MAP kinases in β-cells, JNK activation is required for the recovery of oxidative metabolism following nitric oxide-mediated damage (22). In this report, the mechanism underlying the induction of DNA repair in response to nitric oxide-mediated damage in β-cells was evaluated. A number of mechanisms have been proposed to explain the repair of oxidative DNA damage induced by free radicals (23). Damage induced by the oxidized products of nitric oxide, N2O3 or ONOO−, results primarily in the deamination and oxidation of bases (14,16,43). These altered bases are repaired by base excision repair, in which specific glycosylases first recognize and remove the damaged base. An apurinic/apyrimidinic endonuclease then recognizes the abasic site and cleaves the phosphate-sugar backbone. The resulting gap is filled by polymerase β and sealed by DNA ligase III (23). Although these mechanisms explain the repair of oxidized DNA damage, the mechanisms controlling how cells respond to nitric oxide to activate or enhance DNA repair pathways are not well understood. Most studies thus far have focused on the damaging actions of nitric oxide on DNA and have failed to examine the protective mechanisms activated in cells that are designed to repair the damage. In this report we identify a role for JNK in the repair of nitric oxide-induced DNA damage in β-cells. Using chemical and peptide inhibitors, we demonstrate that JNK is required for the repair of nitric oxide-induced DNA damage. This result may seem counterintuitive because JNK is generally considered a proapoptotic MAP kinase; however, recent studies by Andreka et al. (44) have shown that JNK activation is protective against nitric oxide-induced cardiac myocyte death. Additionally, Mercola and co-workers (45-48) have described a role for JNK in DNA repair after genotoxic stresses. Although this report presents evidence of a protective role for JNK activation in nitric oxide-induced β-cell damage, others have suggested that JNK activation participates in cytokine-induced β-cell apoptosis (49). We do not observe apoptosis in response to DEANO or IL-1 under the culture conditions used in this study (13; and data not shown).
Ammendrup et al. (49) have shown that prolonged incubations with IL-1 result in β-cell apoptosis and that the inhibition of JNK protects against cytokine-mediated apoptosis. These seemingly opposing results could be explained by differences in cytokine exposure. In support of this hypothesis, we have recently shown that short exposures to cytokine stimulate β-cell necrosis, but prolonged incubations with IL-1 for >36 h result in a shift in the mechanism of death from necrosis to apoptosis.³ This shift in the type of cell death from necrosis to apoptosis is associated with irreversible DNA damage. Similar to the temporal effects of cytokines on β-cell viability, recent studies suggest that the temporal activation of JNK regulates the cellular responses to cytokines such as tumor necrosis factor α-induced damage (50). Tumor necrosis factor α-induced JNK activation has been shown to be biphasic, where early transient activation of JNK is protective, while prolonged activation induces apoptosis (50). This mechanism of JNK regulation may apply to nitric oxide-induced JNK activation, where transient early activation stimulates protective responses, whereas sustained JNK activation contributes to cytokine-induced apoptosis. To investigate further how JNK regulates DNA repair, we examined the activation of p53 by nitric oxide in β-cells. p53 is a transcription factor shown to be involved directly in base excision repair through interaction with polymerase β and indirectly through the transcriptional regulation of factors involved in DNA repair, such as GADD45α (28,38). We originally hypothesized that JNK would induce DNA repair through the activation of p53, because JNK has been shown to activate p53 directly and nitric oxide activates p53 in other cell types (25-27,41). However, nitric oxide, produced endogenously in response to IL-1 or provided exogenously using DEANO, fails to induce p53 phosphorylation or accumulation in β-cells. IL-1 does enhance p53 accumulation to low levels in INS 832/13 cells, but this induction does not regulate DNA repair, because INS 832/13 cells with attenuated p53 levels maintain the ability to repair DNA damage. Although p53 is not induced by nitric oxide in β-cells, the expression of GADD45α, a target of p53 that can also be regulated by p53-independent mechanisms, is enhanced by nitric oxide. GADD45α is an inducible gene activated by various DNA-damaging agents that participates in base excision repair by interacting directly with DNA repair enzymes, including proliferating cell nuclear antigen, an auxiliary factor for polymerase β, and apurinic/apyrimidinic endonuclease (28,31,39). GADD45α maintains the nuclear localization of these enzymes, thereby enhancing DNA repair (28). In addition to p53, GADD45α expression is regulated by various transcription factors, including FoxO1, Oct-1, and NF-YA (29,30,39,51). The accumulation of GADD45α mRNA in response to nitric oxide depends on JNK activation. These findings correlate JNK- and nitric oxide-dependent GADD45α expression with JNK-dependent DNA repair. Using siRNA gene knockdown, the role of GADD45α in the repair of nitric oxide-damaged DNA in β-cells was examined. Importantly, the siRNA knockdown of nitric oxide-induced GADD45α expression results not only in an attenuation of the repair of damaged DNA, it also makes β-cells more susceptible to nitric oxide-mediated DNA damage (Fig. 6A). Knockdown of GADD45α does not completely prevent the repair of damaged DNA.
This result may be due to an incomplete knockdown of GADD45α, or it may be a consequence of the activation of additional pathways that participate in the repair of the damaged DNA. Although the inhibition of DNA repair is not complete, our findings support a role for GADD45α in maintaining and restoring DNA integrity under conditions of stress induced by nitric oxide in β-cells. These findings describe a repair mechanism activated by β-cells in response to nitric oxide-induced DNA damage. We show that JNK activation by nitric oxide plays a central role in the repair process through the p53-independent regulation of GADD45α. Currently, the mechanism by which JNK stimulates GADD45α expression remains unknown; however, we speculate that ATF-2 activation may play a central role. ATF-2, a JNK substrate that we show to be phosphorylated in response to cytokines (Fig. 1D), has been shown to contribute to the transcriptional regulation of GADD45α expression and to participate in JNK-dependent DNA repair (46,52).
4,690
2009-08-02T00:00:00.000
[ "Biology" ]
Application of ejectors for two-phase flows. The article studies the operation of a water jet pump (ejector) and the influence of the mixture characteristics and pipe length on the energy parameters of the ejector system. The relevance of the topic is associated with the active use of ejectors in the modern world in various areas, including hydraulic structures. The widespread use of ejectors is ensured by their high reliability, simplicity of design and ease of maintenance. The purpose of this work is to study the operation of the ejector, the energy characteristics of the flow and the influence of the characteristics of the transported two-phase flow. The calculation of a real practical problem, the selection of an ejector for lifting and transporting solid particles, is given. The density of the transported mixture and the length of the transport pipes are chosen as the variable parameters. The analysis of the calculation shows that taking the energy characteristics of the flow into account when selecting the components of the ejector system is very important, and that the density of the transported mixture affects the efficiency of the system. Introduction Silting of the bottom is often observed during the operation of hydrotechnical structures, in reservoirs and in other water bodies with stagnant or slow-flowing water. Siltation is an accumulation of bed load formed by solid particles of clastic material, dominated by silt, sand or clay particles. This effect has a negative impact on the operation of hydrotechnical structures [1,2] and constitutes a real problem. There are several ways to solve it: 1. Make a salvo release of water. It is very important to provide hydraulic flushing with a fast increase in discharge, determined only by the opening time of the valves of the regulating hydrotechnical structure (usually 10-15 minutes). Under such conditions, the disturbance wave spreads quickly in the channel, leading to a fast increase in the flow velocity and intensive erosion of the bed load. 2. Lift the particles from the bottom with an ejector. When calculating the effectiveness of this method, it is necessary to choose the velocity and the type of movement of the liquid that are energetically favourable [3]. A similar problem was considered by the Moscow State University of Civil Engineering (MGSU). As part of a canal reconstruction project, a study was carried out on the topic "Hydraulic efficiency of engineering measures to increase the transit discharge of the Moscow Canal". For the study, depth measurements were carried out along the overpass in the lower reservoir on the side of the supplying channel. Comparison with depth measurements made at the same points a year earlier showed that the sediment level had changed: the passage section under the overpass has decreased compared with the past measurements. Solid particles impair the operation of the structures (the wave mode of the trunk channel changes, the passage section under the overpass is reduced, the operation of the pumping equipment deteriorates and turbidity increases). The supplying channel therefore clearly needs to be cleared of bed load, and the clearing work must be systematic, because sedimentation happens all the time. MGSU (MISI) proposed one of the hydraulic ways to clean the supply channel, which requires no additional energy or water consumption. This method is simple in design and is based on the use of an ejector. The raised sediments can be stored in the riverside zone or removed outside the structure.
Application of ejectors in the modern world The ejector works according to Bernoulli's principle: it creates a reduced pressure of one medium in a narrowing cross-section, which produces suction of another medium into the flow. This mechanism is quite common in the modern world, and the range of application of ejectors is very wide: pumping out dangerous gases, ventilation of enclosed spaces, transport of coal dust, ash, cement, wood chips, grain, sugar, milk powder. Ejectors are used in cars, and ejection is one of the most popular options for reagent-free water treatment systems [4-6]. The advantages of ejectors are: no moving parts, easy maintenance, high reliability, simple design and low operating cost. General information about water jet pumps (ejectors) The general installation diagram of the water jet pump is shown in Fig. 1. Through the pressure line (1) the working flow with discharge Q1 is brought to the mixing chamber (2) of the ejector. The specified discharge Q2 is brought from the lower reservoir through a suction pipe (3) into the mixing chamber. The combined flow enters the upper reservoir through a common water pipe (4) [7,8]. The ejector (Fig. 2) consists of a chamber (1) in which the flows Q1 and Q2 mix, a nozzle (2), which terminates the pressure pipe of the working flow at the entrance to the mixing chamber, and a suction pipe bend (3), which connects the suction pipe to the mixing chamber [9,10]. The mixing chamber is usually cylindrical in shape (Fig. 2), with a diameter D3 approximately equal to the sum of the suction pipe and nozzle diameters. The mixing chamber is attached to the diffuser, which is connected to the water pipe supplying the receiving reservoir. The working diagram of the ejector (Fig. 3) shows that the working flow Q1 enters the mixing chamber through the nozzle at high velocity, which creates a vacuum inside. Owing to this vacuum, water from the lower reservoir enters the mixing chamber with discharge Q2 [11]. The kinetic energy of the working flow (formula 1) is greater than the energy of the useful flow (the flow from the lower reservoir to the upper one with discharge Q2), which is given by formula (2). When the flows mix, the velocity of the working flow U1 decreases to U3, and the velocity of the useful flow U2 increases to U3 (formula 3), U3 being the total average velocity of the mixed flow. With this velocity U3 the mixed flow enters the diffuser. Energy calculation The energy required to lift the mass of the useful discharge Q2 from the lower level to the height of the upper receiving reservoir consists of the energy: a) to lift the specified useful discharge Q2 from the level of the lower reservoir to the height of the mixing chamber H1, with the resistance in the suction pipe (formula 4) [12], where U2 is the estimated flow velocity in the suction pipe and the total coefficient of resistance of this pipe also enters; b) to overcome the hydraulic resistances in the mixing chamber (formula 5), where hw is the head lost in mixing the flows; c) to raise the total flow to a height ΔH (from the level of the mixing chamber to the level of the free water surface in the receiving reservoir, ΔH = H2 − H1), and also to overcome all the hydraulic resistances along the way (formula 6), in which the resistance coefficient of the diffuser appears. The total energy consumption is therefore determined as the sum of these terms (formula 7). The working flow should have exactly this energy at the entrance to the mixing chamber.
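As a worked illustration of the energy balance itemised in points (a)-(c) above, the following minimal sketch tallies the specific energy (head) the working flow must supply and the corresponding power; all numerical values, symbol names and loss coefficients are assumptions for illustration, not quantities taken from the article or its formulas (4)-(7).

# Minimal sketch of the ejector energy balance described above.
# All numbers and loss coefficients are illustrative assumptions.
G = 9.81  # gravitational acceleration, m/s^2

def total_head(H1, dH, U2, U3, zeta_suction, zeta_diffuser, h_mix):
    """Specific energy (head, m) the working flow must deliver at the nozzle."""
    e_a = H1 + zeta_suction * U2**2 / (2 * G)   # (a) lift Q2 to the mixing chamber
    e_b = h_mix                                  # (b) head lost in mixing the flows
    e_c = dH + zeta_diffuser * U3**2 / (2 * G)   # (c) lift to the reservoir plus losses
    return e_a + e_b + e_c

# Assumed example; P = rho * g * Q1 * E gives the power of the working flow.
E = total_head(H1=2.0, dH=4.0, U2=1.5, U3=3.0,
               zeta_suction=2.5, zeta_diffuser=0.8, h_mix=0.6)
rho, Q1 = 1000.0, 0.05  # water density (kg/m^3) and working discharge (m^3/s)
print(f"required head = {E:.2f} m, working-flow power = {rho * G * Q1 * E:.0f} W")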
Results Consider the basic scheme of a water jet installation (Fig. 1) and take into account the difference in the total energy reserve of the entire flow between the initial cross-section I and the final cross-section II of the mixing chamber. Obviously, the difference between these energies is the loss of energy of the entire combined flow within the chamber. This loss consists of the loss of energy to move the flow along the length of the chamber and the loss of energy to realise the mixing of the two flows. The first type of energy loss can be ignored, since it is very small compared with the mixing loss [13-15]. The full energy reserve in the first cross-section is defined as the sum of the energy of the working flow (with discharge Q1) and the energy of the useful flow (with discharge Q2); the energy in cross-section II is written analogously. The loss of energy in the mixing chamber is therefore the difference of the two, which can be rearranged because Q1 + Q2 = Q. Note that only the working flow Q1 loses energy, while the useful flow Q2 even gains energy. The energy consumption can therefore be written in terms of hw1, the specific lost head of the working flow (formula 11). If formula (11a) is substituted into formula (11) and the result is divided by the weight discharge of the working flow, the lost energy referred to the unit weight of the working-flow discharge is obtained (formula 13). In formula (13) the unknowns are Q1, U1, P3 and U3. Note that hw1 is the specific energy lost by the working flow (not by the entire flow): here all the lost energy is referred to the unit weight of the working flow. If this lost energy is instead referred to the unit weight of the entire flow, one obtains hw, and obviously hw1 > hw. The value hw1 can also be determined in a different way, using the change in the amount of motion, that is, the momentum equation. The calculation scheme is shown in Fig. 4. The length of the mixing chamber is determined using data from experimental studies on the expansion of the jet (for example, for an expansion angle α = 15°). The momentum balance is then written for the mass of liquid enclosed in a cylindrical mixing chamber whose diameter is approximately the sum of the suction pipe and nozzle diameters (18) (Fig. 4a). Note that under steady motion conditions the momentum of the liquid in the region between sections (1'-1') and (3-3) does not change; the increment of momentum of the selected mass can therefore be determined accordingly (formula 19). The sum of the impulses of the external forces acting on this mass of liquid is written next. The gravity (weight) force and the pressure forces of the walls of the mixing chamber are normal to the axis of motion, so the projections of their impulses are zero. The pressure forces on the end sections (Fig. 4b) are proportional to the pressures p1 and p3 acting on the chamber cross-section, so the sum of the impulses equals (p1 − p3) multiplied by the cross-sectional area of the chamber. The second formula for hw1 then follows; using it, the amount of energy consumed by the working flow in the mixing chamber is determined (formula 35). Formula (35), in a slightly different form, was also obtained by Zeuner in 1873. Discussion Based on the data obtained during the inspection of the channel, the effects of the mixture consistency, the pressure loss and the length of the ejector pipes on the working flow velocity are evaluated. The calculations are made in the following order (Fig. 5): the specific gravity γ of the mixture (from 1 to 1.3 in increments of 0.2) is used to determine the working discharge Q1.
[16] The diameter of the nozzle exit hole d and the diameter of the mixing chamber D3 are determined by formulas (36). The initial data for the calculation of the practical problem are presented in tabular form (Table 1). The calculation uses L4 and L4', the lengths of the common water pipe from the mixing chamber to the upper reservoir; varying this length shows how the required velocity of the working flow changes. To analyse the performance of the system, the efficiency factor is used: the ratio of the work required to the work expended [17]. The work required for lifting the liquid is determined by formula (40), and the actual work expended by formula (41), where Δz is the difference between the level of the working flow at the entrance to the mixing chamber and the lifting height of the useful discharge, together with the atmospheric pressure; it can be defined as in formula (42). The efficiency can then be determined by formula (43), and when selecting the pump required for the given situation, formula (44) is used. The results of determining the working flow velocity, the efficiency and the pump power are presented in tabular form, and graphs are plotted to visualise the results and analyse the data obtained (Figs. 6 and 7); Fig. 6 shows the dependence of the efficiency and of the diameter D3 on the varied parameters. Conclusions The effects of the ejector geometries on the ejector performance were studied, and the ejector efficiencies were determined using the measured data. The solution of the practical problem showed that with an increase in the amount of solid particles the efficiency of the ejector system also increases. The pressure loss affects the ejector operation: the velocity U1 of the working flow calculated without taking losses into account is about 8 times lower than the same velocity calculated with the energy losses included. Increasing the length of the common water pipe from the mixing chamber to the upper reservoir from 10 m to 100 m increases the required velocity U1 by 15%.
2,993.2
2021-01-01T00:00:00.000
[ "Engineering", "Physics" ]
Study of vector boson scattering and search for new physics in events with two same-sign leptons and two jets A study of vector boson scattering in pp collisions at a center-of-mass energy of 8 TeV is presented. The data sample corresponds to an integrated luminosity of 19.4 inverse femtobarns collected with the CMS detector. Candidate events are selected with exactly two leptons of the same charge, two jets with large rapidity separation and dijet mass, and moderate missing transverse energy. The signal region is expected to be dominated by electroweak same-sign W-boson pair production. The observation agrees with the standard model prediction. The observed significance is 2.0 standard deviations, where a significance of 3.1 standard deviations is expected based on the standard model. Cross section measurements for W+/- W+/- and WZ processes in the fiducial region are reported. Bounds on the structure of quartic vector-boson interactions are given in the framework of dimension-eight effective field theory operators, as well as limits on the production of doubly-charged Higgs bosons. Vector boson scattering (VBS) and quartic boson couplings are features of the standard model (SM) that remain largely unexplored by the LHC experiments. The observation of a Higgs boson [1-3], in accordance with a key prediction of the SM, motivates further study of the mechanism of electroweak symmetry breaking through measurements of VBS processes. In the absence of the SM Higgs boson, the amplitudes for these processes would increase as a function of center-of-mass energy and ultimately violate unitarity [4,5]. The Higgs boson actually observed by the LHC experiments may restore the unitarity, although some scenarios of physics beyond the SM predict enhancements for VBS through modifications to the Higgs sector or the presence of additional resonances [6,7]. This Letter presents a study of VBS in pp collisions at √s = 8 TeV. The data sample corresponds to an integrated luminosity of 19.4 ± 0.5 fb⁻¹ collected with the CMS detector [8] at the LHC in 2012. The aim of the analysis is to find evidence for the electroweak production of same-sign W-boson pair events. The strong production cross section is reduced by the same-sign requirement, making the experimental signature of same-sign dilepton events with two jets an ideal topology for VBS studies. Candidate events have exactly two identified leptons of the same charge, two jets with large rapidity separation and dijet mass, and moderate missing transverse energy. The final states considered are μ+μ+νμνμ jj, e+e+νeνe jj, e+μ+νeνμ jj, and their charge conjugates and τ-lepton decays to electrons and muons. Figure 1 shows representative Feynman diagrams for the electroweak and QCD induced production. The study of VBS presented here leads to measurements of the production cross sections for W±W± and WZ in a fiducial region. Evidence for electroweak production has been reported by the ATLAS Collaboration [9]. Various extensions of the SM alter the couplings of vector bosons. An excess of events could signal the presence of anomalous quartic gauge couplings (AQGCs) [10]. Doubly charged Higgs bosons are predicted in Higgs sectors beyond the SM where weak isotriplet scalars are included [11,12]; they can be produced via weak vector-boson fusion (VBF) and decay to pairs of same-sign W bosons [13].
The central feature of the CMS apparatus is a superconducting solenoid, of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the field volume are a silicon pixel and strip tracker, a crystal electromagnetic calorimeter, and a brass and scintillator hadron calorimeter. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke of the magnet. The first level of the CMS trigger system, composed of custom hardware processors, is designed to select the most interesting events within 3 μs, using information from the calorimeters and muon detectors. The high level trigger processor farm further reduces the event rate to a few hundred hertz before data storage. Details of the CMS detector and its performance can be found elsewhere [8]. Several Monte Carlo (MC) event generators are used to simulate the signal and background processes. The leading-order event generator MADGRAPH 5.2 [14] is used to produce event samples of diboson production via diagrams with two or fewer powers of α_s and up to six electroweak vertices. This includes two categories of diagrams: those with exactly two powers of α_s, which we refer to as quantum chromodynamic (QCD) production, and those with no powers of α_s, which we refer to as electroweak (EW) production. The EW category includes diagrams with WWWW quartic interactions and diagrams where two same-sign W bosons scatter through the exchange of a Higgs boson, a Z boson, or a photon. Double-parton scattering, triboson production, and doubly charged Higgs boson production samples are also generated using MADGRAPH 5.2. Top-quark background processes are generated with the next-to-leading-order event generator POWHEG 1.0 [15-18]. The set of parton distribution functions (PDFs) used is CTEQ6L [19] for MADGRAPH and CT10 [20] for POWHEG. All event generators are interfaced to PYTHIA 6.4 [21] for the showering of the partons and subsequent hadronization. The PYTHIA parameters for the underlying event were set according to the Z2* tune [22]. The detector response is simulated by the GEANT4 package [23] using a detailed description of the CMS detector. The average number of simultaneous proton-proton interactions per bunch crossing in the 8 TeV data is approximately 21; additional pp interactions overlapping with the event of interest are included in the simulated samples. Collision events are selected by the trigger system requiring the presence of one or two high transverse momentum (pT) muons or electrons. The trigger efficiency is greater than 99% for events that pass all other selection criteria explained below. A particle-flow algorithm [24,25] is used to reconstruct all observable particles in the event. It combines all the subdetector information to reconstruct individual particles and identify them as charged hadrons, neutral hadrons, photons, and leptons. The missing transverse energy E_T^miss is defined as the magnitude of the negative vector sum of the transverse momenta of all reconstructed particles (charged and neutral) in the event. The selection of events aims to single out same-sign dilepton events with the VBS topology while reducing the top quark, Drell-Yan, and WZ background processes. All objects are selected following the methods described in Ref. [26]. To avoid bias, the number of events passing the selection was not evaluated until the analysis was complete.
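As a simple illustration of the E_T^miss definition quoted above (the magnitude of the negative vector sum of the transverse momenta of all reconstructed particles), the sketch below evaluates it for a toy list of particle-flow candidates; the candidate list is invented and no CMS software is implied.

import math

# Toy particle-flow candidates: (pT in GeV, azimuthal angle phi in rad).
candidates = [(45.0, 0.3), (38.0, 2.9), (20.0, -1.2), (12.0, 1.7)]

px = sum(pt * math.cos(phi) for pt, phi in candidates)
py = sum(pt * math.sin(phi) for pt, phi in candidates)
met = math.hypot(px, py)              # magnitude of the negative vector sum
met_phi = math.atan2(-py, -px)        # direction of the missing momentum

print(f"E_T^miss = {met:.1f} GeV at phi = {met_phi:.2f} rad")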
Two same-sign lepton candidates, muons or electrons, with pT > 20 GeV and |η| < 2.4 (2.5) for muons (electrons) are required to be isolated from other reconstructed particles in a cone around the lepton direction. Jets are reconstructed using the anti-kT clustering algorithm [27] with a distance parameter R = 0.5, as implemented in the FASTJET package [28,29]. Events are required to have at least two selected jets with ET > 30 GeV and |η| < 4.7. The VBS topology is targeted by requiring that the two jets with leading pT have large dijet mass, mjj > 500 GeV, and large pseudorapidity separation, |Δηjj| > 2.5. To suppress top-quark backgrounds (tt̄ and tW), a top-quark veto technique is used; it is based on the presence of a soft muon in the event from the semileptonic decay of the bottom quark and on bottom-quark jet tagging criteria based on the impact parameters of the constituent tracks [30]. A minimum dilepton mass, mll > 50 GeV, is required to reduce the W + jets and top-quark background processes. To reduce the background from WZ production, events with a third, loosely identified lepton with pT > 10 GeV are rejected. Drell-Yan events can be selected if the charge of one lepton is measured incorrectly. To reduce this background, |mll − mZ| > 15 GeV is required for e±e± events. The charge confusion in dimuon events is negligible. The Drell-Yan background is further reduced by requiring E_T^miss > 40 GeV. The nonprompt lepton background, originating from leptonic decays of heavy quarks, hadrons misidentified as leptons, and electrons from photon conversions, is suppressed by the identification and isolation requirements imposed on muons and electrons. The remaining contribution from the nonprompt lepton background is estimated directly from data. The efficiency for a predefined loose leptonlike object to pass the full lepton selection, typically called the "tight-to-loose ratio" (RTL), is estimated in a control sample with one additional lepton candidate that passes the standard lepton selection criteria. To account for the dependence on kinematic observables, this ratio is parameterized as a function of pT and η. Systematic uncertainties are obtained by the application of RTL to other control samples, accounting for the sample dependence in the estimation of RTL. The WZ → 3lν process is normalized in a data control region by requiring a third fully identified lepton with pT > 10 GeV. The contribution of opposite-sign dilepton events to the signal region is estimated by applying data-to-simulation charge misidentification scale factors to simulated events with two opposite-sign leptons. The charge-misidentification fraction is estimated using Z boson events and is found to be between 0.1% and 0.3% for electrons, while it is negligible for muons. After the full selection, about 15% of the background is due to the WZ → 3lν process and about 75% to nonprompt leptons. Backgrounds from opposite-sign lepton pairs misreconstructed as same-sign ("wrong-sign background"), WW production via double parton scattering (DPS), and triboson production (VVV), which includes top-pair plus boson processes, contribute less than 10%. The expected signal and background yields are shown in Table I for positive and negative pairs separately and for their sum. The signal corresponds to W±W± production, including EW and QCD contributions, and their interference, which amounts to approximately 10%. The EW processes constitute 85%-90% of the total signal contribution.
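The event selection just described can be summarised as a plain filter over flat event records. The sketch below is only illustrative: the event dictionary layout, field names and the Z-mass value are assumptions, and detector-level ingredients (isolation, the soft-muon and b-tag top-quark vetoes, charge misidentification) are not modelled.

# Illustrative same-sign dilepton + dijet selection following the cuts quoted above.
def passes_selection(ev):
    leps = [l for l in ev["leptons"]
            if l["pt"] > 20 and abs(l["eta"]) < (2.4 if l["flavor"] == "mu" else 2.5)]
    if len(leps) != 2 or leps[0]["charge"] != leps[1]["charge"]:
        return False
    jets = [j for j in ev["jets"] if j["et"] > 30 and abs(j["eta"]) < 4.7]
    if len(jets) < 2:
        return False
    j1, j2 = jets[0], jets[1]                      # leading-pT jets, assumed sorted
    if ev["mjj"] < 500 or abs(j1["eta"] - j2["eta"]) < 2.5:
        return False
    if ev["mll"] < 50 or ev["met"] < 40:
        return False
    if ev["n_loose_leptons"] > 2:                  # third-lepton veto against WZ
        return False
    if leps[0]["flavor"] == leps[1]["flavor"] == "e" and abs(ev["mll"] - 91.2) < 15:
        return False                               # Z veto, same-sign ee channel only
    return True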
The mjj and leading-lepton pT distributions for the signal and background processes are shown in Fig. 2, which displays mjj (top) and the leading-lepton pT, pT^l,max, in the signal region (bottom); the hatched bars include statistical and systematic uncertainties, the W+W+ and W−W− candidates are combined, the signal, W±W± jj, includes the EW and QCD processes and their interference, and the histograms for the other backgrounds include the contributions from wrong-sign events, DPS, and VVV processes. In order to quantify the significance of the observation of the production via VBS, a statistical analysis of the event yields is performed in eight bins: four bins in mjj and two bins in the lepton charge. The signal efficiencies are estimated using simulated samples. In the statistical analysis, shape and normalization uncertainties are considered. The shape uncertainties are estimated by remaking the distribution of a given observable after considering the systematic variations for each source of uncertainty. The lepton trigger, reconstruction, and selection efficiencies are measured using Z/γ* → l+l− events, which provide an unbiased sample with high purity. The estimated uncertainty is 2% per lepton. The uncertainties due to the momentum scale for electrons and muons are also taken into account and contribute 2%. The jet energy scale and resolution uncertainties give rise to an uncertainty in the yields of about 5%. The uncertainty in the event selection efficiency for events with neutrinos yielding genuine E_T^miss in the final state is assessed and leads to an uncertainty of 2%. The uncertainty in the estimated event yields related to the top-quark veto is evaluated by using a Z/γ* → l+l− sample with at least two reconstructed jets and is found to be about 2%. The statistical uncertainty in the yield of each bin and for each process is also taken into account. The uncertainty of 2.6% in the integrated luminosity [31] is considered for all simulated processes. The normalization of the processes with misidentified leptons has a 36% systematic uncertainty [26], which has two sources: the dependence on the sample composition and the method used to estimate it. The WZ normalization uncertainty is 35%, dominated by the small number of events in the trilepton control region. Theoretical uncertainties are estimated by varying the renormalization and factorization scales up and down by a factor of two from their nominal value in the event, and are found to be 5% for the signal normalization and 50% for the triboson background normalization. A PDF uncertainty of 6%-8% in the normalization of the signal and WZ processes is included. The systematic uncertainties of the background normalizations are taken into account using log-normal distributions. The cross section is extracted for a fiducial signal region. The fiducial region is defined by requiring two same-sign leptons with pT^l > 10 GeV and |η^l| < 2.5, two jets with pT^j > 20 GeV and |η^j| < 5.0, mjj > 300 GeV, and |Δηjj| > 2.5, and is less stringent than the event selection for our signal region. The measured cross section is corrected for the acceptance in this region using the MADGRAPH MC generator, which is also used to estimate the theoretical cross section. The acceptance ratio between the selected signal region and the fiducial region is 36% considering generator-level jet and lepton properties only. The overall acceptance times efficiency is 7.9%.
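For orientation only, the eight-bin counting analysis described above can be mimicked with a crude Asimov-style significance, combining bins in quadrature; this is not the CLs procedure used in the Letter, no systematic uncertainties are included, and the per-bin signal and background yields below are invented placeholders.

import math

def asimov_z(s, b):
    """Approximate expected significance of a counting experiment with signal s over background b."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Hypothetical yields: four mjj bins for ++ pairs, then four for -- pairs.
bins = [(3.2, 1.5), (2.1, 0.9), (1.4, 0.6), (0.8, 0.4),
        (1.6, 1.0), (1.0, 0.7), (0.7, 0.4), (0.4, 0.3)]

z_comb = math.sqrt(sum(asimov_z(s, b) ** 2 for s, b in bins))
print(f"combined expected significance = {z_comb:.1f} sigma (toy numbers)")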
The MADGRAPH prediction of the same-sign W-boson pair cross section is corrected by a next-to-leading-order to leading-order cross section ratio estimated using VBFNLO [32-34]. The fiducial cross section is found to be σ_fid(W±W± jj) = 4.0 +2.4 −2.0 (stat) +1.1 −1.0 (syst) fb, with an expectation of 5.8 ± 1.2 fb. In addition to the dilepton same-sign signal region, a WZ → 3lν control region is studied by requiring an additional lepton with pT larger than 10 GeV. This control region allows the measurement of a fiducial cross section of the WZjj process, σ_fid(WZjj) = 10.8 ± 4.0 (stat) ± 1.3 (syst) fb, with an expectation of 14.4 ± 4.0 fb. The fiducial region is defined in the same way as for the WW analysis, but requiring one more lepton with pT^l > 10 GeV and |η^l| < 2.5. The acceptance ratio between the selected signal region and the fiducial region is 20% considering generator-level jet and lepton properties only. The overall acceptance times efficiency is 3.6%. To compute the limits and significances, the CLs [35-37] construction is used. The observed (expected) significance for the W±W± jj process is 2.0σ (3.1σ). Considering the QCD component of the W±W± jj events as background and the EW component together with the EW-QCD interference as signal, the observed (expected) signal significance reduces to 1.9σ (2.9σ). Various extensions to the SM alter the couplings between vector bosons. Reference [10] proposes nine independent C- and P-conserving dimension-eight effective operators to modify the quartic couplings between the weak gauge bosons. The variable mll is more sensitive to AQGCs than pT^l,max, mlljj, and mjj. Figure 3 (top) shows the expected mll distribution for three values of F_T,0/Λ⁴; Λ is the scale of new physics and F_T,0 is the coefficient of one of the nine effective operators. The observed and expected upper and lower limits at 95% confidence level (C.L.) on the nine coefficients are shown in Table II, where all the results are obtained by varying the effective operators one by one. The effect of possible AQGCs on the WZ process in the signal region is negligible. Some operators for anomalous quartic gauge boson couplings may lead to tree-level unitarity violation. We also report the values of the operator coefficient for which unitarity is restored at the scale of 8 TeV, the unitarity limit. In addition to the limits on individual operator coefficients, the expected and observed two-dimensional 95% C.L. limits on F_S,0/Λ⁴ and F_S,1/Λ⁴ are presented in Fig. 3 (bottom): a linear combination of those operators leads to a scaling of the SM cross section. Doubly charged Higgs bosons are predicted in models that contain a Higgs triplet field. We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the relevant funding agencies. [1] ATLAS Collaboration, Phys. Lett. B 716, 1 (2012).
3,985.8
2014-10-23T00:00:00.000
[ "Physics" ]
The Relationship between Wind Pressure and Pressure Coefficients for the Definition of Wind Loads on Buildings : Wind-induced pressures on buildings are the product of a velocity pressure and a pressure coefficient. The way in which these two quantities are calculated has changed over the years, and Design Codes have been modified accordingly. This paper tracks the evolution of the approach to wind loading of buildings from the practice of the 1950s, mainly referring to the Swiss Code SIA, to the most recent advances, including probabilistic methods, internet databases, and an advanced treatment of meteorological phenomena. Introduction Broadly speaking, the action exerted by the wind on a body is proportional to the wind velocity pressure through an aerodynamic coefficient, accounting for the way in which the body interacts with the flow. In the ideal case in which the flow is laminar and the body is streamlined, the surface pressure can be expressed as w(M) = q · c(M) (1), where q = 0.5ρv² is the velocity pressure, and c = c(M) is the pressure coefficient, depending on the location M where the pressure is measured. This is not quite the case for Civil construction in general and for buildings in particular. Indeed, wind in the low atmosphere is characterized by a turbulent boundary layer flow, in which the mean wind speed varies with the height above the ground and on which a three-component turbulence is superimposed. In addition, civil constructions quite seldom meet the requirement of a streamlined shape, being instead bluff bodies. The bluff shape causes flow separation, generating turbulence additional to the oncoming one, the so-called signature turbulence, whose characteristics are related to the aerodynamics of the building and, to a lesser extent, to the characteristics of the oncoming wind. Finally, the mean and fluctuating properties of the wind flow cannot be defined through a deterministic approach, but rather need a probabilistic treatment. The combination of the three aspects above makes Equation (1) the general expression of a physical law, yet unable on its own to give a quantitative definition of the load. The structure of Equation (1) seems to separate well what derives from the characteristics of the flow from the effects of aerodynamics, yet this separation is not unique, and lends itself to many possible interpretations, as well as to potential misunderstandings. In fact, the meaning of the two terms appearing on the right-hand side of the equation must be properly defined from both the physical and the statistical points of view. In this paper, the evolution of Equation (1) from its first use to modern applications is briefly outlined. For the use of designers making their way through Codes of Practice, the meaning and use of Code equations are also explained. Early Studies on Building Aerodynamics With the aim of experimentally measuring wind loads on simple objects, in 1871 the first wind tunnel was built by F. H. Wenham. The results of his tests on flat inclined plates were then used by W. Unwin, who first attempted to evaluate wind pressures on building roof surfaces [1]. In doing this, a first major error arose, that of assuming that the load measured on an isolated element remains the same when the element becomes part of an assemblage; this is not quite true, as the pressure distribution is related to the overall geometry, and not merely to that of the detail where it is measured. The first wind tunnel for civil engineering applications was built in 1890 in Melbourne, Australia, by W. C.
Kernot, with the purpose of measuring wind pressures on a flat plate orthogonal to the flow. In the following years, wind tunnels were also built in Denmark by J. O. V. Irminger (1893) and in France by A. G. Eiffel (1909), both aimed at assessing wind loads on civil structures. Aerodynamic studies began to develop rapidly, and the first heavier-than-air flight was achieved in 1903, making aerodynamics the crucial issue in the development of aeronautics. For more than 50 years civil and aeronautical aerodynamics, though differing from each other, were investigated in the same experimental facilities, as it had not yet been recognized that the flow encountered by aircraft flying at hundreds or thousands of meters of height is quite different from that hitting ground-based Civil constructions. This misunderstanding is at the base of perhaps the major mistake made in earlier times when evaluating wind loads on Civil structures. In the early 1900s, the need for specific studies on the pressures exerted by the wind on buildings began to arise. Until the 1950s, the pressure distributions on plates with different shapes, dimensions and pitch angles were investigated in wind tunnels, and the results were used to evaluate the loading of the upwind surfaces of buildings. Only later was the important role of suction on the leeward surfaces in the overall wind-induced forces acknowledged [2-4]. Irminger [2] first carried out several experiments on rectangular model buildings with sloped roofs, showing the pressure pattern along the middle section of the tested models. Then, Irminger and Nøkkentved [5,6] used flow visualization to show (i) that the upwind face was subject to (over)pressure; (ii) that the leeward and side faces, as well as the downwind roof slope, were subject to negative pressure (or suction); and (iii) that the upwind slope was exposed to either positive or negative pressures depending on its inclination. Moreover, the dependency of the pressure distribution on the ratio between width, depth, and height of the building was also highlighted. In their experiments, Irminger and Nøkkentved [5,6] and Nøkkentved [7] acknowledged the role of the wind tunnel floor roughness in influencing the wind speed profile and thus affecting the distribution and the intensity of wind pressures on model buildings. In the meantime, starting from 1928, the first regulations on building design for wind loading were introduced in Europe. More or less at the same time, in the US the American Society of Civil Engineers (ASCE) started working on recommendations for 'Wind Bracing in Steel Buildings', incorporating all the available data and studies [8]. The values of the pressure coefficients were based on a collection of measurements made in wind tunnels with smooth flow conditions, and on models often detached from the tunnel floor; therefore, despite the detailed description of the pressure pattern they provide, such measurements are now known to be of little use, as they are simply wrong. The first attempt to compare wind tunnel measurements with full-scale data was made by Bailey [9], showing quantitative differences between the results coming from the two approaches. Based on the work of Nøkkentved [7], Bailey and Vincent [10] made one of the first experiments in a boundary layer wind tunnel, simulating the flow in the low atmosphere and finding good agreement with full-scale measurements. The turning point in the assessment of wind pressures on buildings was the work of Jensen in the 1950s.
Continuing the work of Nøkkentved, Jensen [11] clarified the role of ground roughness in generating wind turbulence, and first pointed out the need for, and set the rules for, properly scaling the atmospheric boundary layer in the wind tunnel. Jensen's model law states that the ratio h/z_o (also known as the Jensen number, Je) between the building height, h, and the roughness length, z_o, should be the same in the wind tunnel as it is in full scale. Despite the work of Jensen, it took many years before tests in boundary layer wind tunnels became commonly available, and in 1956 one of the first wind Codes (SIA 160) was published in Switzerland, still incorporating pressure and force coefficients measured in smooth flow [12]. The Modern Wind Engineering Approach Besides the error in modelling the flow in the wind tunnel, until the 1960s the assessment of wind loads was based on the wrong hypothesis of a steady wind resulting in steady pressures. This is now known not to be true, especially in the case of bluff geometries causing flow separation and, therefore, a fluctuating separated shear layer and a turbulent wake. These aerodynamic features produce surface pressure fluctuations on the body even when the flow in which it is immersed is laminar; broadly speaking, the wind velocity fluctuations induced by separation are referred to as signature turbulence. Before the spread of the use of Extreme Value statistics, Equation (1) was meant as the combination of the largest value of the velocity pressure at the site (often coinciding with the largest value ever measured, clearly depending on the measuring technique, on the length of the observation window, as well as on the inherent randomness of the quantity) and an average value of the pressure coefficient. This, of course, led to neglecting both the oncoming and the signature turbulence. In the 1960s, the various aspects of the wind loading of structures were integrated into a comprehensive theory by Davenport [13], setting the stage for the Alan G. Davenport Wind Loading Chain [14,15] and starting the era of modern Wind Engineering. The use of boundary layer wind tunnels and the establishment of a series of International Conferences on Wind Effects on Buildings and Structures (now the International Conference on Wind Engineering, ICWE), allowing the exchange of ideas and research results, played their role in this process. Davenport [16] first observed the need for a statistical approach to wind loading, introducing the concept of the "basic design wind speed", defined as an "extreme value statistics of the wind speed averaged over a minute". In this definition, two notions were introduced: (1) the need for defining an averaging period for wind speeds, rather than considering instantaneous values; the latter, in fact, are not only affected by the measuring technique, but also do not necessarily produce extreme effects on the structure if their duration is too short; (2) the need for an Extreme Value (EV) analysis to evaluate the return wind speed, i.e., a fractile of the yearly maxima associated with a specified probability of exceedance. The choice of one minute as the averaging period was justified by the wrong assumption that the average size of turbulent eddies was between 1.2 and 1.8 km, corresponding to a time scale of 60 s when the wind speed is 20 m/s and 30 m/s, respectively; averaging over a minute would therefore have cancelled out the turbulent fluctuations.
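The return wind speed introduced in point (2) can be made concrete with a Type I (Gumbel) model of the yearly maxima, the distribution adopted later in the paper; the location and scale values below are assumptions used only to show the calculation.

import math

def return_wind_speed(mu, beta, R_years):
    """Gumbel quantile of the annual maximum wind speed for return period R:
    yearly exceedance probability p = 1/R, v_R = mu - beta * ln(-ln(1 - p))."""
    p = 1.0 / R_years
    return mu - beta * math.log(-math.log(1.0 - p))

mu, beta = 22.0, 2.5   # assumed Gumbel parameters of the annual maxima (m/s)
for R in (10, 50, 100):
    print(f"R = {R:3d} yr -> v_R = {return_wind_speed(mu, beta, R):.1f} m/s")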
This is now known not to be true, as values in the order of 50 to 300 m apply to the turbulence scale during synoptic storms. More than that, however, what was lacking in the first work of Davenport was an appropriate treatment of the effects of turbulence on the wind loads. Later, Davenport [13] better recognized the structure of the atmospheric turbulence and its impact on the wind loading. He proposed that the instantaneous wind speed be represented as the sum of a mean wind speed U, averaged over a longer period T, and a zero-mean turbulent component u'(t), averaged over a shorter period τ: V(t) = U + u'(t) (2). In so doing, the peak wind speed is represented as the product of the mean wind speed and a gust factor G_u: V̂ = U·G_u = U·(1 + g_u·I_u) (3), where I_u = σ_u/U is the turbulence intensity, and g_u is the velocity peak factor, indicating the average number of standard deviations σ_u by which the peak wind speed exceeds the mean value. The peak factor was found to be a function of the averaging period and of the average rate ν_u at which the instantaneous speed up-crosses the mean value; when the turbulent fluctuations can be approximated by a Gaussian process, it is given by g_u = √(2·ln(ν_u·T)) + 0.5772/√(2·ln(ν_u·T)) (4). Based on the available measurements of the spectrum of the atmospheric turbulence, it is found that g_u ranges between 2.8 and 2.9. This procedure implies the choice of the averaging period and of the duration of the gust. Based on the Van der Hoven spectrum of the horizontal wind speed [17], Davenport [14] found it appropriate to use an averaging period of 1 h, accounting for the macro-meteorological fluctuations, and a gust duration of 3 s, accounting for the micrometeorological fluctuations. In so doing, the mean wind speed was to be meant as the driving statistical quantity, to be evaluated by applying EV analysis to site-specific meteorological data, and the gust factor was to incorporate all the effects coming from the ground surface roughness. Applying the quasi-steady theory, i.e., assuming that the instantaneous value of the surface pressure in turbulent flow coincides with what it would be if the flow were laminar and the wind speed equal to the instantaneous turbulent speed, the instantaneous surface pressure at a point on the building is given by w(t) = 0.5ρV(t)²·c̄_p (5), where c̄_p is to be meant as the mean value of the measured pressure coefficient. Upon linearization of Equation (5) one obtains w_lin(t) = 0.5ρ(U² + 2U·u'(t))·c̄_p (6), and the corresponding peak surface pressure ŵ is ŵ_lin = w̄_lin·G_w,lin (7), where w̄_lin = 0.5ρU²·c̄_p and where G_w,lin is a linearized gust loading factor, transforming the mean load into a peak load. The linearization in Equation (6) is based on the assumption of small turbulence, i.e., that u'(t) ≪ U, and brings two major simplifications: first, the mean surface pressure coincides with the surface pressure associated with the mean wind speed; second, the fluctuating surface pressure is proportional to the turbulent component, w'_lin(t) ∝ u'(t), and its RMS is w̃_lin = 2·I_u·w̄_lin. This latter assumption allows expressing the spectrum of the surface pressure fluctuations directly from the spectrum of the atmospheric turbulence: S_w ∝ S_u. In the case of high turbulence, the linearization cannot be considered acceptable any more, and this makes all further steps much more complicated. From Equation (5), the mean surface pressure turns out to be w̄ = 0.5ρU²·(1 + I_u²)·c̄_p (8), indicating that the bias in the mean surface pressure arising from linearization is w̄/w̄_lin = 1 + I_u².
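The linearized quasi-steady chain just outlined (gust factor on the speed, mean pressure, fluctuation RMS, peak pressure) is easy to evaluate numerically. The sketch below uses the peak factors quoted in the text (g_u of about 2.9 for the speed and 3.5 for the pressure, as adopted by Eurocode 1); the mean speed, turbulence intensity, air density and mean pressure coefficient are assumptions.

RHO = 1.25  # air density, kg/m^3 (assumed)

def linearized_quasi_steady(U, I_u, cp_mean, g_u=2.9, g_w=3.5):
    """Gust factor, peak speed and linearized mean/peak surface pressure (cf. Eqs. 3, 6-7)."""
    G_u = 1.0 + g_u * I_u                      # Eq. (3): gust factor on the wind speed
    V_peak = U * G_u
    w_mean_lin = 0.5 * RHO * U**2 * cp_mean    # linearized mean pressure
    w_rms_lin = 2.0 * I_u * w_mean_lin         # fluctuation RMS under linearization
    w_peak_lin = w_mean_lin + g_w * w_rms_lin  # linearized peak pressure
    return G_u, V_peak, w_mean_lin, w_peak_lin

G_u, V_peak, w_mean, w_peak = linearized_quasi_steady(U=25.0, I_u=0.2, cp_mean=0.8)
print(f"G_u = {G_u:.2f}, peak speed = {V_peak:.1f} m/s")
print(f"mean pressure = {w_mean:.0f} Pa, linearized peak pressure = {w_peak:.0f} Pa")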
The RMS of the surface pressure follows from Equation (5) as w̃ = ρ·U·σ_u·c̄_p·√(1 + 0.5·I_u²) (9). In so doing, the bias in the variance of the surface pressure arising from linearization is w̃²/w̃²_lin = 1 + 0.5·I_u²; the corresponding bias in the RMS can be very accurately approximated as w̃/w̃_lin ≅ 1 + 0.25·I_u². Since the bias is a systematic error, it can be used as a correction factor for Equation (7), accounting for the linearization of both the mean value and the variance of the surface pressure. Using the same format as Equation (3), the peak surface pressure is written as ŵ = w̄ + g_w·w̃ = w̄·G_w (10), where g_w is the surface pressure peak factor and G_w is the gust loading factor. The bias in the peak surface pressure can be calculated as the ratio of Equations (10) and (7), and to evaluate it one would need to know the exact value of g_w. On the other hand, a linearized version of g_w can be obtained by equating G_w in Equation (10) and G_w,lin in Equation (7) (Equation (11)); the result ranges between 2.8 and 3.2 when the turbulence intensity ranges between 0 and 0.4. The exact value of g_w cannot be calculated in closed form; numerical analyses based on the turbulence spectrum of Eurocode 1 show that, again when the turbulence intensity ranges between 0 and 0.4 and assuming τ = 3 s, it ranges between 2.8 and 3.9. A value of 3.5 is adopted by Eurocode 1. Figure 1 contains a sketch of the long-term spectrum of the surface pressure at a point, as derived from the velocity spectrum of Van der Hoven. In addition to the pressure fluctuations associated with the macro-meteorological and turbulent fluctuations of the wind speed, it also contains the fluctuations deriving from signature turbulence. The approach of Davenport, based on a mean pressure coefficient and a gust factor, in fact incorporates only the effects of the oncoming turbulence, but not those of signature turbulence. The magnitude of the latter term depends on the aerodynamic features and on the point at which the pressure is measured. For streamlined structures and for points on the windward faces the effect of signature turbulence is low to negligible; for bluff structures and for points in the separated flow region the effects of signature turbulence can be high. Enhanced Probabilistic Approach The approach of Davenport was first validated by measurements performed by Dalgliesh [18] on a 45-story office building, showing that at all points on the windward side of the structure the PDF of the (positive) pressures agreed with the Gaussian assumption. However, at the only measurement point on the leeward surface the PDF of the (negative) pressures departed from Gaussianity. This aspect was then deeply investigated by Peterka and Cermak [19], who measured the pressure at hundreds of points on the four vertical walls and on the roof of a tall-building wind tunnel model. They found that pressures can be grouped in two categories: 1. on the windward surface, the surface pressure is positive and its PDF is close to being Gaussian; this behaviour is more generally found at points where c̄_p > −0.1; 2. on the surfaces exposed to separated flow, the PDF of the pressures departs from being Gaussian and the left tail tends to an exponential form; this happens when c̄_p < −0.25. On the other hand, based on studies on low-rise buildings [20,21], Holmes [22] showed that even on the windward walls pressures can be non-Gaussian, this effect being more evident for large values of the turbulence intensity of the oncoming flow. From a practical point of view, this translates into peak factors larger than those evaluated by Equation (4), reaching values potentially as large as 10.
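The bias factors quoted above are easily checked numerically; the short sketch below evaluates them over the range of turbulence intensities considered in the text, confirming that 1 + 0.25·I_u² reproduces the exact RMS ratio very closely.

import math

def linearization_bias(I_u):
    """Bias of the full quasi-steady pressure statistics with respect to their linearization."""
    bias_mean = 1.0 + I_u**2                          # mean pressure ratio
    bias_rms_exact = math.sqrt(1.0 + 0.5 * I_u**2)    # RMS ratio
    bias_rms_approx = 1.0 + 0.25 * I_u**2             # approximation quoted in the text
    return bias_mean, bias_rms_exact, bias_rms_approx

for I_u in (0.1, 0.2, 0.3, 0.4):
    m, r, ra = linearization_bias(I_u)
    print(f"I_u = {I_u:.1f}: mean bias = {m:.3f}, RMS bias = {r:.4f} (approx. {ra:.4f})")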
It was then clear that turbulent fluctuations do not immediately translate into pressure fluctuations, at least at points where the flow is separated, and this suggested a revision of the quasi-steady approach. Based on the observation that it is impossible to separate the components of the pressure fluctuation deriving from the oncoming turbulence from those deriving from signature turbulence, an alternative to Davenport's approach is that of combining the mean velocity pressure with a tail statistic of the pressure coefficient. A first attempt in this direction was that of Lawson [23], who proposed to determine the design values of the pressure coefficients as those corresponding to the 5 × 10⁻⁴ fractile of the parent distribution. The chosen probability level corresponds to the largest gust in one hour having a duration of 1.8 s, as suggested by Eaton and Mayne [24]. Once it is recognized that the pressure coefficient is to be calibrated as a value corresponding to a low probability of exceedance, the question arises of whether it is more appropriate to consider the parent population or to apply Extreme Value (EV) analysis. For the latter option, it is observed that the domain of attraction of parents with an exponential tail is the Type I EV distribution, or Gumbel distribution [25]. In their pioneering work, Cook and Mayne [26] tried to find a statistical model from which to derive the pressure coefficients, as an alternative to the use of nominal values (corresponding to mean values) as reported in the UK Code of Practice for wind loads [27]. They proposed a design approach in which the definition of both the wind speed (or velocity pressure) and the pressure coefficients is based on the Spectral Gap [17]: • The design value of the wind speed is obtained as an EV statistic of the wind speed, U, averaged over a period T of 10 min or 1 h, including all fluctuations of the macrometeorological peak; • The design value of the pressure coefficient, c_p, is the peak value within the averaging period T, including all the micro-meteorological fluctuations of the incident wind turbulence, as well as those coming from signature turbulence. As already pointed out by Davenport [14], in this case the peak pressure coefficient is also not to be confused with a maximum instantaneous value measured at a point; it is rather a statistic of the time- and space-averaged instantaneous values (Equation (12)). The duration τ and the averaging area A shall be related to the capacity that the load has to produce an effect; for example, the load duration needed to produce damage to cladding elements is smaller than that needed to produce damage on structural elements having larger tributary areas. The averaging duration and the averaging area can be related to each other once the convective nature of the process is recognised. A common relationship between the characteristic dimension l = √A of the averaging area and the averaging duration is provided by the TVL formula [28], τ = 4.5·l/V (13), V being the mean wind speed. The choice of either A or τ allows the use of Equations (12) and (13). Common practice is to select A as the tributary area of the structural member or cladding element under consideration. Therefore, in the approach of Cook and Mayne both the mean wind speed and the pressure coefficients have to be understood as statistical variables, and their design values need to be assessed by EV analysis.
The mean wind speed is usually evaluated with a yearly probability of exceedance equal to 0.02, corresponding to a return period R = 50 yrs; the same yearly probability of exceedance also applies to wind loads. Therefore, the statistic of the pressure coefficient has to be chosen such that, combined with a velocity pressure having a yearly probability of exceedance of 0.02, it provides a wind load also having a yearly probability of exceedance of 0.02. Assuming a Type I EV distribution for both the annual maximum wind speed and the pressure coefficient, Cook and Mayne [26] recommended the use of the 78% fractile of the peak pressure coefficients. The Cook-Mayne coefficient was established for the UK wind climate and is used worldwide. Indeed, if a more reliable value is to be obtained, it should be calibrated on the specific climate of the site of interest. Codification Procedures EV analysis for the evaluation of the design pressure coefficients has been accepted worldwide. An exhaustive literature review of the evolution of, and of the geographic differences in, the evaluation of pressure coefficients was presented by Gavanski et al. [29], while a state-of-the-art review of the methods to estimate the peak pressures was made by Gavanski and Cook [30]. In Europe, the method of Cook and Mayne is widely used for the evaluation of the maximum and minimum pressure coefficients. Despite this, the sources of building pressure coefficients include both largest measured peaks [31] and 78% fractiles [32] resulting from EV analysis [33,34]. In fact, the current version of Eurocode 1 [35] proposes two sets of pressure coefficients for the assessment of (1) local pressures on cladding and roofing elements (c_pe,1, for loaded areas of 1 m², corresponding to the largest measured peaks) and (2) wind loading on resistant structural members (c_pe,10, for loaded areas of 10 m², corresponding to 78% fractiles). Current Codes and Standards incorporate the gust factor approach by using an equivalent form of Equation (7), expressing the characteristic wind load at a point M as in Equation (14), where q_m,ref = 0.5·ρ·v_m²(z_ref) and v_m(z_ref) are the mean velocity pressure and the mean wind speed at height z_ref above the ground, corresponding to a yearly probability of exceedance of 0.02, or a return period R = 50 yrs, respectively, and c_p(M) is the representative value of the pressure coefficient at the point, c_p(M) = p̂(M)/q̂(z_ref) (15), where p̂(M) is the peak relative surface pressure measured in the wind tunnel at point M, and q̂(z_ref) is the peak velocity pressure measured in the wind tunnel at height z_ref above the tunnel floor. Therefore, the height z_ref to be used in Equation (14) should be the same as that used for the normalization of the pressure coefficient in Equation (15). Unlike the gust factor approach of Davenport, the method of Cook and Mayne accounts for wind gustiness through the pressure coefficients. These incorporate the effects of both the oncoming and the signature turbulence, and may strongly deviate from a Gaussian behaviour. Instead of Equation (14), the wind loading turns out to be w(M) = q_m,ref · c_p,(78)(M) (16), where c_p,(78)(M) indicates the 78% fractile of the pressure coefficient at point M. Comparing Equations (14) and (16), one obtains Equation (17). The normalization procedure in Equation (17) gives rise to the so-called pseudo-steady pressure coefficient, highlighting the fact that it is calibrated within the steady-state method. Figure 2 sketches the parent and the EV distributions of the pressure coefficient.
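Equation (16) is straightforward to apply once the 50-yr mean wind speed at the reference height and the 78% fractile of the pressure coefficient are known; in the minimal sketch below the wind speed, air density and coefficient are placeholders, not values from the paper.

RHO = 1.25  # air density, kg/m^3 (assumed)

def design_load(v_m_ref, cp_78):
    """Characteristic wind load per Eq. (16): w = q_m,ref * c_p,(78)."""
    q_m_ref = 0.5 * RHO * v_m_ref**2     # mean velocity pressure at z_ref, Pa
    return q_m_ref * cp_78

# Assumed 50-yr mean wind speed at z_ref and a suction coefficient near a roof edge.
w = design_load(v_m_ref=27.0, cp_78=-1.8)
print(f"design pressure = {w:.0f} Pa")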
The figure should help in understanding the probabilistic nature of the pressure coefficients and the differences between the mean (c̄_p), the 78%-fractile (c_p,(78)) and the pseudo-steady (c_p) pressure coefficients. Pressure Coefficients Analysis Codes and Standards provide pressure coefficients c_p for rectangular buildings with flat roofs with different corner arrangements (sharp, curved, mansard, and with parapets) as well as with pitched roofs. When the geometry of interest is not covered by the Code or Standard, it is necessary to resort to wind tunnel tests to quantify the pressure coefficients. The result of a wind tunnel test is a dataset consisting of time histories c_p(M; t) of the pressure coefficients at a number of measurement points M on the model building surfaces. The instantaneous pressure coefficient is calculated as c_p(M; t) = [p(M; t) − p_o]/q_m,ref (18), in which p(M; t) is the absolute surface pressure measured at point M and time t, and p_o is the static air pressure in the region outside the influence of the body (barometric pressure). In most cases, the reference height z_ref for pressures is taken equal to the building height h, therefore q_m,ref = q_m,h. In some cases, the pressure coefficients are normalized with respect to a reference wind tunnel height z_wt. In the latter case, preliminary to the statistical treatment of the data, the measurements must be converted to the reference height z_ref: c_p(M; t) = (q_m,wt/q_m,ref) · c_p,wt(M; t) (19), in which c_p,wt(M; t) is the pressure coefficient normalized with respect to the velocity pressure q_m,wt at the height z_wt. The datasets provided by wind tunnel tests are sampled at a frequency f_s, corresponding to a sampling time 1/f_s usually smaller than the averaging period τ. Therefore, the time histories of the measured pressure coefficients must first be converted into time histories of τ-averaged pressure coefficients, also corresponding to an area average as per the TVL formula. Thus, a moving average is applied to the time series, c_p,τ(M; t) = (1/τ)·∫ over [t − τ/2, t + τ/2] of c_p(M; s) ds (20), corresponding to low-pass filtering the measured time series at a frequency 1/τ. For example, Cook [32] referred to a load duration τ = 1 s and to a common design wind speed for the UK, U = 22.5 m/s, which correspond to an averaging area A = 12.5 m² (or a characteristic dimension l = 5 m, corresponding to the diagonal of a square area). By doing this, the values given in Eurocode 1 as c_pe,10 are obtained (where the subscript e stands for external, as opposed to i used for internal pressures). On the other hand, when wind tunnel time series of the point pressure coefficients are available, Equation (20) must be evaluated with the duration given by Equation (21), where A is the tributary area of the loaded structural element and v_m,ref is the expected value of the design mean wind speed. The concept of tributary area applies to secondary structural elements or cladding elements; therefore, it is in the order of a few square meters. For main structural elements and for foundation loads, where the tributary area is much larger, besides the use of Equation (20), the reduction in the resulting loads arising from the lack of coherence of the oncoming flow is accounted for through a background factor. This issue was first addressed by Davenport [13], who introduced the notion of the background factor B², taking into account not only the lack of correlation of the oncoming flow turbulence, but also the vertical variation of the mean wind speed.
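The processing chain described above, re-referencing the coefficients (Eq. 19), moving-averaging over the load duration τ (Eq. 20), extracting epochal maxima, fitting a Type I EV model and evaluating the 78% fractile (Eq. 24), can be condensed into a short routine. This is a minimal sketch: the synthetic record, the number of epochs and the method-of-moments Gumbel fit are assumptions, and a real analysis would use the wind tunnel records (and treat suctions through minima rather than maxima).

import numpy as np

def cp78_from_series(cp_wt, fs, tau, q_ratio=1.0, n_epochs=20):
    """78% fractile of the peak pressure coefficient from a wind-tunnel time series.

    cp_wt    -- c_p series normalized at the tunnel reference height
    fs       -- sampling frequency, Hz
    tau      -- load duration for the moving average, s (e.g. from the TVL formula)
    q_ratio  -- q_m,wt / q_m,ref, re-referencing factor of Eq. (19)
    n_epochs -- number of blocks used to extract epochal maxima
    """
    cp = q_ratio * np.asarray(cp_wt)                             # Eq. (19)
    win = max(1, int(round(tau * fs)))
    cp_tau = np.convolve(cp, np.ones(win) / win, mode="valid")   # Eq. (20)
    maxima = [blk.max() for blk in np.array_split(cp_tau, n_epochs)]
    mean, std = np.mean(maxima), np.std(maxima, ddof=1)
    beta = std * np.sqrt(6.0) / np.pi                            # Gumbel scale (moments)
    mu = mean - 0.5772 * beta                                    # Gumbel location
    return mu - beta * np.log(-np.log(0.78))                     # Eq. (24), about mu + 1.4*beta

# Synthetic demonstration record (assumption, not wind tunnel data).
rng = np.random.default_rng(0)
fake_cp = 0.6 + 0.25 * rng.standard_normal(200_000)
print(f"c_p,(78) = {cp78_from_series(fake_cp, fs=500.0, tau=0.25):.2f}")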
In particular, the background factor is expressed as a function of the ratio √A_l / L_u between the characteristic dimension of the loaded area A_l and the turbulent length scale L_u. In so doing, within the gust factor approach, the equivalent (or peak) load W_lin is given by Equation (22), where the gust loading factor defined in Equation (23) depends on the loaded area and g_W,lin is the associated peak factor. It is clear that, when the characteristic dimension of the structure is small compared with the dimension of the turbulent eddies, B² → 1. Similarly to g_w,lin, a value of 3.5 is also adopted by Eurocode 1 for g_W,lin. A recent summary can be found in Liu et al. [36]. For the equivalent load, only a linearized version is given; this derives from the fact that the background factor in Equation (23) is obtained following a stochastic approach in the frequency domain, which in fact requires a linear relationship between the wind velocity fluctuations and the surface pressure fluctuations. Finally, once the time series of pressure coefficients are normalized with respect to the reference height z_ref and filtered according to the load duration τ, the EV analysis can be performed. When a Type I EV distribution is used for the extremes of the pressure coefficient, the Gumbel parameters µ_M (location, or mode) and β_M (scale, or dispersion) are evaluated at each measurement point M. Then, the 78% fractile of the pressure coefficient is evaluated according to Equation (24). Example The above procedure is applied to pressure measurements on the flat roof of a building with dimensions b = 24.40 m, d = 38.10 m, h = 12.20 m, to evaluate the pressure coefficients c_p,1 for roof cladding and c_p,10 for structural elements. The raw data are taken from the NIST database [37]. First, a moving average is applied to the original time series by considering a mean wind speed of 26.5 m/s, giving τ = 0.25 s for c_p,1 and τ = 0.76 s for c_p,10. Then, the Gumbel parameters µ_M and β_M in Equation (24) are calibrated based on the filtered time series, and the values of c_p,(78) are calculated. Figures 3a-d and 4a-d show the contour plots of the pressure coefficients c_p,1 and c_p,10, respectively, evaluated as in Equation (17), for wind angles of incidence of 0°, 15°, 30°, and 45°. Figures 3e and 4e show the envelope of the calculated values, together with the zoning proposed by Eurocode 1. For the purpose of assessing roofing elements, the envelope of Figure 3e can be used, since the quantity of interest is the maximum wind load obtained from an omnidirectional analysis. For secondary structural members with small to moderate tributary areas, up to about 10 m², the c_p,10 envelope of Figure 4e can still be used. However, in the case of larger tributary areas, i.e., for main structural elements or for foundation loads, the background factor needs to be considered. In this case, the pressure coefficients c_p,10 provided by the loading patterns of Figures 4a-d shall be used for a directional analysis, in conjunction with a background factor B(A_l = b · d). In Table 1, a comparison of the area-averaged coefficients from the analysis of the NIST data (CW) and the Eurocode 1 values (EC1) is presented. The discrepancy between the corresponding values can be partly ascribed to the fact that the values given in Eurocode 1 apply to different ratios b:d:h; therefore, they must in some way smooth out the differences between one case and another.
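The two processing steps just described, the τ-averaging of the measured coefficient histories (Equation (20)) and the EV analysis of the epoch maxima (Equation (24)), are straightforward to script. The sketch below is illustrative rather than the authors' implementation: it assumes a TVL-type relation τ · v_m,ref = 4.5 · l, with l = √(2A) the diagonal of a square loaded area, which reproduces the durations quoted above (τ = 1 s for U = 22.5 m/s and l = 5 m; τ ≈ 0.25 s and 0.76 s for 1 m² and 10 m² at 26.5 m/s), and it uses scipy's Gumbel parameterization (location µ, scale β), for which the p-fractile is µ − β · ln(−ln p). All function names and the synthetic record are placeholders; for suction peaks the same procedure is applied to the minima with the sign reversed.

```python
import numpy as np
from scipy.stats import gumbel_r

def load_duration(area_m2, v_mean):
    """Load duration tau from a TVL-type rule: tau * V = 4.5 * l, with l the
    diagonal of a square of area A (an assumption consistent with the values
    quoted in the text)."""
    return 4.5 * np.sqrt(2.0 * area_m2) / v_mean

def tau_average(cp, fs, tau):
    """tau-averaged pressure-coefficient history (Equation (20)): a moving
    average, i.e. low-pass filtering at a frequency of about 1/tau."""
    n = max(int(round(tau * fs)), 1)          # samples per averaging window
    return np.convolve(cp, np.ones(n) / n, mode="same")

def cp_78(cp_avg, n_epochs=16, p=0.78):
    """Fit a Type I EV (Gumbel) distribution to the epoch maxima of the
    averaged coefficient and return the p-fractile (Equation (24))."""
    maxima = [seg.max() for seg in np.array_split(cp_avg, n_epochs)]
    mu, beta = gumbel_r.fit(maxima)           # location (mode), scale (dispersion)
    return mu - beta * np.log(-np.log(p))

# Example on a synthetic record: 500 Hz sampling, 10 m^2 tributary area, 26.5 m/s
fs = 500.0
cp_raw = 0.8 + 0.4 * np.random.default_rng(0).standard_normal(int(600 * fs))
tau = load_duration(10.0, 26.5)               # ~0.76 s, as in the example above
print(cp_78(tau_average(cp_raw, fs, tau)))
```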
Future Developments According to the procedures developed through the years, the assessment of the wind load on rigid buildings requires knowledge of both the return-period velocity pressure and the statistics of the pressure coefficients. As already pointed out, the pressure coefficients available in current Codes and Standards suffer from a number of deficiencies:
1. They refer to a rather narrow variety of geometries, often limited to rectangular-plan buildings with constant height; for geometries that can be schematized as an assemblage of rectangular elements, empirical criteria are given to extend the use of pressure coefficients measured for rectangular buildings;
2. The statistical definition of the available pressure coefficients is not always clear, and seldom complies with Equation (24);
3. The load duration τ used in Equation (12), usually between 1 s and 3 s, has in some cases proven inadequate when assessing cladding loads in areas of strong negative pressures, a smaller value being more appropriate; this is an effect of high suctions being strongly non-Gaussian and therefore featuring high peak factors;
4. The use of envelopes of pressure coefficients averaged over small areas for the assessment of main structural members and foundation loads proves inaccurate; as an alternative, a more refined directional analysis using influence coefficients would be appropriate.
On the other hand, velocity pressures also suffer from a number of limitations:
1. Extreme wind maps are often old, and produced with heterogeneous data and heterogeneous (and often out-of-date) statistical methods;
2. The various storm mechanisms are often not appropriately considered;
3. The measurements are very seldom continuous, which gives rise to an underestimation of the design wind speed as an effect of downsampling.
In recent years, the development of Web and Information Technologies has led to new opportunities for more reliable procedures in the assessment of structural safety. At the beginning of the 21st century, the University of Notre Dame founded the NatHaz (Natural Hazards) Modeling Laboratory with the aim "to quantify load effects caused by various natural hazards on structures and to develop innovative strategies to mitigate and manage their effects" [38]. The NatHaz website was published, providing a collection of aerodynamic and damping datasets, online design modules for low- and high-rise buildings, and other features for the design of buildings against wind loads. At the same time, the National Institute of Standards and Technology (NIST) developed Database-Assisted Design (DAD) software for low- and high-rise buildings, freely available on the NIST website [39]. The software of both institutions is based on the availability of aerodynamic databases. In this framework, NIST and the Tokyo Polytechnic University (TPU) provided data collections to create databases of pressure coefficients (aerodynamic databases) and mechanical properties of buildings. The NIST database collects data measured at the Boundary Layer Wind Tunnel Laboratory (BLWTL) of the University of Western Ontario (UWO); it is the result of a joint study conducted by NIST and Texas Tech University (TTU) entitled 'Windstorm Mitigation Initiative: Wind Tunnel Experiments on Generic Low Buildings' [37]. The TPU database, instead, is part of the 21st Century Center of Excellence Program named 'Wind Effects on Buildings and Urban Environment' [40].
The characteristics of the wind tunnel tests are summarized in Table 2 for both aerodynamic databases. As discussed in Section 5.2, the data provided in the databases can be used to calculate point surface pressure coefficients and area-averaged surface pressure coefficients on roof and wall surfaces, as well as foundation loads on low-rise buildings. The aerodynamic databases can be expanded in the future to incorporate data for less regular geometries; these can come either from systematic studies on a variety of geometries (e.g., [41]) or from specific project-related analyses (e.g., [42]). Currently, wind tunnel tests are considered the most reliable tool for investigating building aerodynamics, the main concern with Computational Fluid Dynamics (CFD) being the difficulty of calibrating simulations and validating their results. However, with the purpose of building aerodynamic databases, a joint effort within the scientific community might be able to produce standard criteria for simulations, the results of which may in the future complement wind tunnel data. On the other hand, research is currently being developed towards the possibility of using reanalysis data for the definition of the extreme wind climate at sites of interest. Numerical Weather Prediction (NWP) models simulate the physics of the atmosphere using the available observations, and calculate meteorological variables on a three-dimensional grid extending from the surface to the stratosphere. These models have traditionally been used for weather forecasting, but they may also be rerun to produce a set of historical data. For example, the Integrated Forecast System (IFS) at the European Centre for Medium-Range Weather Forecasts has been rerun to produce a global reanalysis from 1979 to the present at a horizontal resolution of 31 km, known as the ERA5 reanalysis [43]. The resolution of ERA5 is too coarse for calculating extreme values at a specific site, and downscaling to a higher resolution is hence required. This can be accomplished by rerunning the NWP model at higher resolution within the ERA5 dataset, which is called dynamical downscaling. As an example, the NORA10 dataset [44] was created by running an NWP model at 11 km resolution, covering most of Northern Europe; the dataset is currently being updated with new model runs at 3 km resolution. Dynamical downscaling has the advantage that the physical consistency between the different variables is retained, but it is computationally demanding and therefore not suitable for producing a long dataset. A cheaper alternative is to run a high-resolution model for a shorter period and use it to find a statistical relationship between the short high-resolution dataset and the long low-resolution dataset; this method is called statistical downscaling [45]. The quality of these modelled datasets depends on the NWP model used and on its resolution, as well as on the methods used for statistical downscaling and interpolation; validation of the data against observations is clearly necessary for these datasets to become of practical use. In particular, it is observed that some models tend to underestimate the strongest winds. Not all NWP datasets include wind speed and direction as an explicit output.
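As an illustration of how such reanalysis data can be retrieved in practice, the snippet below sketches an ERA5 request through the Copernicus Climate Data Store Python client (cdsapi). It is a hedged example rather than part of the procedure described here: the dataset name, variable names and request keywords follow common CDS API usage and should be checked against the current CDS documentation before use.

```python
import cdsapi

# Minimal sketch: retrieve ERA5 10 m wind components for one month over a small
# area of interest. Dataset and variable names are assumptions to be verified
# against the CDS catalogue.
client = cdsapi.Client()
client.retrieve(
    "reanalysis-era5-single-levels",
    {
        "product_type": "reanalysis",
        "variable": ["10m_u_component_of_wind", "10m_v_component_of_wind"],
        "year": "2020",
        "month": "01",
        "day": [f"{d:02d}" for d in range(1, 32)],
        "time": [f"{h:02d}:00" for h in range(24)],
        "area": [46.0, 7.0, 44.0, 9.0],  # North, West, South, East (degrees)
        "format": "netcdf",
    },
    "era5_wind_202001.nc",
)
```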
Examples of datasets of possible use when assessing wind loads are: the SMHI HARMONIE-ALADIN dataset, covering the whole of Europe for the period 1961 to 2016 at a horizontal resolution of 11 km [46]; the MESCAN-SURFEX analysis, covering the period from 1961 to 2015 at 5 km spatial resolution [47]; the NORA10/NORA3 datasets, covering the period 1957 to 2002 for Scandinavia, Britain and parts of Northern Europe at 11 km resolution, and the period 1995 to 2020 at 3 km resolution [44]; and the Klinogrid dataset, providing hourly wind speed and wind direction on a 1 km resolution grid for the period 1957 to 2015 [45]. The visionary Wind Engineer can therefore think of a future in which web-based apps will access online databases to retrieve aerodynamic and meteorological data, and combine them to obtain the "best" estimate of the wind load on a project structure; machine learning and big data analytics could be the tools to achieve that. How far away that future is, and whether we will ever see it, we do not know. Data Availability Statement: Publicly available datasets were analyzed in this study. These data can be found here: NIST Aerodynamic Database (accessed on 20 November 2021). Conflicts of Interest: The authors declare no conflict of interest.
9,088.8
2022-02-17T00:00:00.000
[ "Engineering", "Environmental Science" ]
Acoustic Self-Calibrating System for Indoor Smart Phone Tracking This paper presents an acoustic indoor localization system for commercial smart phones that emit high pitched acoustic signals beyond the audible range. The acoustic signals with an identifier code modulated on the signal are detected by self-built receivers which are placed at the ceiling or on walls in a room. The receivers are connected in a Wi-Fi network, such that they synchronize their clocks and exchange the time differences of arrival (TDoA) of the received chirps. The location of the smart phone is calculated by TDoA multilateration. The precise time measuring of sound enables high precision localization in indoor areas. Our approach enables applications that require high accuracy, such as finding products in a supermarket or guiding blind people through complicated buildings.We have evaluated our system in real-world experiments using different algorithms for calibrationfree localization and different types of sound signals. The adaptive GOGO-CFAR threshold enables a detection of 48% of the chirp pulses even at a distance of 30m. In addition, we have compared the trajectory of a pedestrian carrying a smart phone to reference positions of an optic system. Consequently, the localization error is observed to be less than 30 cm. Introduction From the sustained rise and ubiquitous availability of mobile computers, smart phones, and handheld devices in everyday life, a multitude of exciting new location-dependent applications have emerged.Context sensitive applications support the user in everyday life.One of the most important contexts is user location for navigation.The demand for navigation in large structures as railway stations, airports, trade fair halls, or department stores is obvious, since the equipment, the mobile device of the people, is already available. The GPS-Module in commercial off-the-shelf (COTS) smart phones and hand-held devices makes navigation systems reliable to assist in outdoor areas [1].The demand of localization systems begins to shift towards closed scenarios.For indoor environments, there is the need for new localization approaches, since the reliability of GPS vanishes in densely built-up urban areas and is completely void inside buildings.In addition, to effectively navigate people in their environments, for example, to specific products in a supermarket or to particular exhibition booths on trade fairs, a more accurate localization system as GPS is needed.Hence, for indoor applications alternative technologies are required to provide the signal inside buildings with a low cost infrastructure. Related Work Today several indoor localization systems are available, based on different methods and technologies.Some of these systems work with COTS smart phones.In addition, many participants have already COTS smart phones, which reduces the costs of the localization system.Figure 1 shows an overview of the different technologies and the achievable accuracy of indoor localization systems based on COTS smart phones which were developed by scientific research groups. We use the principles of smart phone localization from our prior work [2] to apply our new developed algorithm (Cone Alignment) and particle filter.Further, we show localization with the integrated inertial measurement unit and compare the results with a reference motion tracking system.Furthermore, we showed in [3] an optimized receiver hardware to increase sensitivity and accuracy of the localization system. 
Basic Indoor Localization (i) Many present localization systems use radio frequency (RF) signals for localization.The RF systems use the propagation of radio waves for position calculation.Therefore, the existing infrastructure can often be used.In the following, a brief description of indoor localization systems based on three different RF technologies is presented.Otsason et al. used the GSM communication with wide signal-strength fingerprints to locate the user in indoor environments [4]. For the localization, no infrastructure is required, but the accuracy strongly depends on the environment.Another possibility is using the Wi-Fi communication [5][6][7].Current smart phones have a Wi-Fi module implemented to communicate with a network.RADAR [8] operates with the existing multiple Wi-Fi access points.Further, they use the received signalstrength indicator (RSSI) to calculate the distances between the Wi-Fi access points and the mobile phone.The accuracy depends on the number of Wi-Fi access points and the environment.The third technology is Bluetooth, which has the shortest range among the three technologies.However, the technology has some flaws for accurate positioning application.First of all, Bluetooth adjusts the signal-strength when the signal becomes too strong or too weak.Moreover, Bluetooth takes a lot of time to discover new devices.As a result, these restrictions make Bluetooth positioning impractical and not feasible for high precision localization. To sum up, the RF systems are susceptible to errors in dynamic environments.For example, the RSSI value depends on the environment and the smart phone. The RSSI value is distorted by objects in the direct path, in the vicinity and by environmental influences, like air humidity, and so forth.Additionally, the RSSI value also depends on the orientation of the antenna.The antenna directivity is influenced by specific smart phone types and the actual orientation to the anchor nodes.RF localization systems can localize people with low accuracy (1.5 m-3 m).Through combination with other technologies, this accuracy can be improved.The multimethod approach [9] uses a combination of built-in sensors of mobile devices and the capabilities of the end-users, which estimates positions with a scanner application.Redpin considered the signal-strength of GSM, Bluetooth, and Wi-Fi access points on a mobile phone to calculate the position [10]. (ii) An alternative technology is pedestrian dead reckoning (PDR) with inertial sensors.By using the integrated MEMS sensors (accelerometers, gyroscopes), the current position can be calculated recursively based on the measured acceleration and angular rate of the movement.Inertial sensors based localization work without addition infrastructure.However, the errors of the sensors are accumulated during the integration of the measurement values, which increases the localization error with the investigation time [11].Therefore, position calculation based only on inertial sensors is usually fused with an absolute location method.Thus Kim et al. presented a smart phone localization system based on Wi-Fi access points and inertial sensors.Zhang et al. 
presented a smart phone localization system based only on inertial sensors [12].Different methods were introduced to provide adaptive step lengths detection by analyzing vertical acceleration data.The experimental results showed that the obtained trajectory was able to follow the true path with an error margin of a meter in a walking distance of 45 m.Mautz compared different approaches based on inertial sensors which are integrated in smart phones or in external cases [13].The localization accuracy varies greatly between 0.1% and 20% of the travelled distance and depends on the used methodology (algorithm) and sensors. (iii) Other existing smart phone localization systems use information of the surrounding.Further, the magnetic field fluctuations and anomalies inside buildings [14] can be used to create landmarks for localization.Another possibility is using the fluorescent light as a medium to transmit position information by using a pulse-frequency modulation technique [15].Hence, a smart phone can receive the encoded light information through the integrated camera and can calculate the position.It is also possible to use only the visual information of the surrounding [16,17] for localization.The integrated camera of the smart phones is used to create images and compare the images with a database.Moreover, with a simultaneous localization and mapping (SLAM) algorithm the position can be estimated.Thus, no additional infrastructure is needed, but these systems are characterized with a high computational performance.Problems with shaking of the camera during walk and motion blur lead to failures [18,19].Similar or dynamic environments are mostly encountered in densely populated areas, for example, shopping malls, where the localization errors are high.Most of the research groups uses the time of flight (ToF) or round trip time (RTT) measurement for smart phone positioning.However, there are several intrinsic uncertainty factors of a ToF measurement which lead to the ranging inaccuracy.For COTS smart phones, there exists a variable latency, a changeable misalignment between the timestamps of the command from the transmitted signal and the transmitted signal from the loudspeaker.Another problem is the synchronization of the smart phones and receivers.These delays can easily add up to several milliseconds, which imply a ranging error of several cm.Borriello et al. presented the WALRUS [20] localization system, where acoustic sound for PDAs/Laptops at a frequency of 21 kHz was received.The wireless network provides a synchronizing pulse along with the information about the room to determine the location in a room-level accuracy.Liu et al. improved the Wi-Fi localization accuracy with an acoustic ranging [21].The phones are using nearby peer phones as reference points and calculate the relative distances with the acoustic RTT.It means smart phone transmits the impulses and smart phone receives the impulses and transmits a new impulse to the smart phone .For the distance measurement, no synchronization is necessary except for time delay.Liu et al. use this additional distance measurement to increase the accuracy to 1-2 m of the Wi-Fi localizations system.A pure sound localization system is Beep [22] and BeepBeep [23].Peng et al. 
showed that a localization system can use mobile phones which transmit and receive audible sound impulses between 2 kHz and 6 kHz [23,24].Further, the system needs no additional infrastructure and uses the RTT between the smart phones to measure the distance between different smart phones in a resolution of about 1-2 cm.For the system, the latency is measured and transmitted to the other smart phone.In this case, a very precise position measurement is possible.Rishabh et al. use the loudspeakers which are installed in shopping malls and consumer stores to play music for public entertainment [25].The use of barely audible (low energy) pseudorandom sequences in their approach poses very different challenges to other approaches which use highenergy ultrasound waves.The approach was tested in a meeting room and reported a promising initial result with localization accuracy of 50 cm. Sound Indoor In most of the state-of-the-art systems, the anchor nodes are used as transmitters.Those receivers detect the sound signal emitted by the anchor nodes.However, this method suffers from certain disadvantages. (i) The sound signals are received at different positions during a movement (see Figure 2).Thus, the mobile device needs the information of the environment, especially the positions of the beacons to calculate the own position. ( [44]. the band between 80 Hz and 12 kHz).Outside this frequency range, the microphone has low sensitivity to receive sound from larger distances.Additionally, there exists a maximum sampling rate of the analog to digital converter of COTS smart phones.The corresponding sampling frequency needs to be greater than twice of the maximum signal frequency.As a result, the sound emitted by the handheld device lies in the audible range, detectable by the user.Furthermore, this frequency band is crowded with natural sounds, making it more difficult to distinguish the localization signal from noise. (iii) Due to permanently receiving the sound signals by the mobile device, an increased power consumption on the mobile side is necessary for signal identification and calculation [26,27]. System Overview In the presented work, the practical implementation of the concept acoustic self-calibrating system for indoor smart phone tracking (ASSIST) as discussed in [2] is considered.Using this concept, the above-mentioned disadvantages (Chapter II-B) were avoided.The proposed indoor localization is schematically shown in Figure 3.The system works with COTS smart phones and requires no additional equipment from the user.The following is a brief description of the system. In ASSIST, the smart phones generate sound impulses beyond the human audible range.The sound impulses were received by self-built receivers which can be placed at the ceiling or on the walls of a room.A minimum of three receivers is required to localize a mobile phone in one localization cell in two dimensions.The receivers were connected to a Wi-Fi network to synchronize the timestamps of the incoming signal.Additionally, the receivers were connected with a wireless network to an evaluation unit.The evaluation unit is connected to the smart phones via cellular communication (GPRS/UMTS/LTE), which serves the ID of the specific sound and provides the map with the actual position of the user.In ASSIST, the absolute acoustic localization system is supported by the integrated inertial sensors.In areas where no receivers are available, the integrated inertial sensors can be used to localize the user for short periods. 
Human Sense of Hearing at High Frequencies. In an applicable localization system based on sound signals, the frequency range of the used signals should be outside of the audible range.Choosing the correct frequency range is therefore essential.The following section elaborates different frequency ranges of human cognition and various hearing thresholds.Human hearing capability is the best at frequencies where most of the speech takes place, which is around 0.5-6 kHz.The absolute hearing threshold defines the minimum sound pressure level, which a pure tone needs to have in order to be recognizable for a human being.Sakamoto et al. have conducted measurements of the absolute hearing threshold in the frequency range from 8 to 20 kHz, for different age groups [28].In the range from 18 to 20 kHz they reported average hearing thresholds between 112 and 148 dB SPL (sound pressure level).One should note that the hearing threshold was measured under laboratory conditions.For the case of background noise (typical environment of a crowded building), the hearing threshold will be raised through masking. To evaluate the audibility of high frequency sound signals emitted by smart phones, we measured the sound pressure level of different commercial smart phones for different frequencies and distances.The sound pressure level values of the smart phones were then compared to the lowest values of average hearing threshold and corresponding standard deviation .The difference between smart phone sound pressure and average hearing threshold for a specific frequency was calculated in units of (Table 1). As expected, the audibility of the sound signals is worse when frequency increases and the distance to the measured smart phone as well.The measurements and calculations show that with a chance of 0.13% (3.5) a sound signal with 18 kHz can be heard in a distance of 5 m in a quiet room.Therefore, for the least upper bound of the auditory threshold, a frequency of 18 kHz is chosen to guarantee that the signal is outside of the audible range. 3.2. Transmitter.In our system, the smart phone speakers transmit the sound signals for the localization.To analyze the maximum frequency limitation and the maximum acoustic bandwidth of a smart phone speaker, several COTS-smart phones were tested.Therefore, the frequency response and the radiation characteristic were measured. For the measurement of the frequency response, sound with white noise was transmitted from several smart phones and recorded with a broadband measurement microphone Earthworks M50. The frequency response is depicted in Figure 4, which shows a damping factor of 20 dB in a range of 1 kHz to 22.5 kHz.The sound amplitude of frequencies with more than 21 kHz decreases rapidly with higher sound frequency.In addition, Filonenko et al. presented in a study the practical limitations of sound generation with a speaker of a COTS smart phone [29].Frequencies above 22 kHz are significantly affected by noise.Also, our results show that the maximum frequency of localization system based on smart phones is 21 kHz.Up to this limit, the sound signals from the speaker have a high amplitude which enables them to transmit sound over long distances. 
For the measurement of the smart phone radiation characteristics, the sound signals were measured within a distance of 25 cm from a microphone at different positions.Therefore, a smart phone holder is designed to allow a manual rotation of the smart phone and inclination angle around the holders axes.The smart phone is placed along the horizontal axis.The speaker is located on the opposite side of the measuring microphone.The measurements start at an inclination angle of 0 ∘ and the smart phone rotates around the holders axes with an angle of 15 ∘ .This corresponds to the movement of the microphone along a circle around the smart phone.The advantage of this rotation around the holders axes is its simple implementation.Eventually, the inclination angle of the smart phone is increased to reach 180 ∘ .The 3D measured radiation characteristics are shown in Figure 5 and an axisplot is depicted in Figure 6.As expected, the radiation is anisotropic and has a small directivity into the direction of the ear.The sound pressure is plotted logarithmically.As a reference, the sound pressure is located within a direct orientation of the speaker to the microphone.The reference sound pressure level of 0 dB is assigned to a distance of 35 dB as the origin of the coordinate system. Generating audio signals with a smart phone requires approximately 33 mW [26].Moreover, the signal length is 2 ms and hence the smart phone requires 66 Ws for every transmitted burst.However, 58% of the power is consumed by decoder.Hence, calculating chirps requires less power than decoding audio from an MP3 file.In addition, the smart phone requires 55 mW for transmission of the calculated position to the smart phone.Compared to localization systems, where the position is calculated on the smart phone, the power for listening of the signals and calculation takes approximately 150 mW [26,27] and the CPU load is about 80%.Moreover, the calculation is limited, due to the low power CPU, to simple localization algorithms (not enough computation power for a particle filter).As a result, our localization system benefit from longer battery life of the smart phone and better position estimation (complex algorithms can be run on the server).Using TDOA as the localization principle, the system is independent of the exact transmission time of the pulse.Hence, the operation system requires no modification or patch to ensure deterministic behaviour.On the contrary, localization systems based on TOF or round trip measurements rely on precise transmission and receiving time of the signal.Thus, the operation system is patched to ensure deterministic real-time behaviour.Consequently, the user does not need root rights to modify the operating system. Receiver. Ten prototype receivers for receiving the sound signals from the smart phones were built.Figure 7 shows the block diagram of the receiver.The receivers calculate the low level signal processing (correlation, threshold) and the localization is calculated on a central server. The first part in the signal chain of our receivers is a transducer, which converts acoustical signals into electrical signals.The designed system uses a small, low cost transducer, powered by a maximum voltage of 5 V. 
Further, MEMSmicrophones from Knowles Acoustics were used and the sensitivity as a function of frequency was calculated and compared for different measurements as depicted in Figure 8.The MEMS-microphone shows a peak around 20 kHz.For detecting sound signals in the range of 18-22 kHz, the use of this MEMS-microphone is preferred. An 8th order Butterworth low-pass filter with a cut-off frequency of 17.5 kHz was used to eliminate ambient noise.Before digitizing the data, the signal is analog amplified by a factor of V = 414, which is a trade off between sensitivity and false detections.Subsequently, the sound signals were digitized using an analog digital converter (ADC) having a resolution of 15 bits per sample and a sampling rate of 88.15 kHz.The digitized signals are correlated and the threshold is applied to the result.Further, the peaks and the IDs (identification numbers) of the sound signals are estimated and the timestamps are transmitted from the receiver (Figure 9) via Ethernet-Interface to a central server (e.g., notebook). To determine the time of arrival (ToA) of the received sound impulses, a precise time synchronization is needed, as the accuracy of the localization system relies on synchronization precision between the receivers.The receivers are connected to a Fast-Ethernet network to synchronize their clocks.The connected receivers (slaves) negotiate a master receiver which acts as a time reference.Subsequently, the other clients (receivers) adjust their clocks to the master considering time offset and time drifts.The slaves ping to the master to get the current time of the master via UDPprotocol.This time is corrected by round trip time from the slave.Time offset and the time drift are both considered by an adaption of the Network Time Protocol algorithm.Both time offset and clock drift between slave and master are obtained by linear regression from the set of the time stamps.The implementation of synchronization can be found in [30].With a 802.11 b/g Wi-Fi connection, a synchronization precision of greater than 0.1 ms can be achieved [31].As a result, the theoretical localization synchronization error for the speed of sound (340 m/s) is 3.4 cm. Figure 10 shows the opening angle of the receivers.Therefore, measurement data from a localization experiment with 10 receivers was used.The positions of the smart phones emitting the received signals are plotted relatively to the receiver.All receivers are located in the same position in the center of Figure 10 and aligned in the same direction, marked with a cross.The opening of the microphone is in the positive -direction.Thus, the figure shows the positions of the smart phone, where the corresponding receiver was able to detect the signal that was sent out by the smart phone.This can be seen as the opening angle of the receivers.The opening angle depends mainly on the directivity of the microphone and on the detection threshold of the receiver.As expected, the microphone receives the highest number of signals in the direction of the microphone.Going towards the back of the receiver, the number of received signals decreases and has a minimum at 180 ∘ from the front. Software Application. We have developed an Android software application (app), which transforms a standard COTS device into a transmitter for ASSIST.Fundamentally our designed application has three functionalities: (I) communication with the evaluation unit (server), (II) sound control, (III) and visualization of the current position on the map. 
(i) The system works when the user downloads and starts the app in an area which supports the ASSIST infrastructure.The user interface is simple as one starts the app, which connects to an evaluation unit and receives an ID using its internet connection.Every registered hand-held device in a localization cell is assigned a unique ID.The smart phone is connected to the internet without a special infrastructure, only a mobile network is mandatory.In this work long term evolution (LTE) is used for wireless data communication which is the latest standard technology of mobile data transmission.The smart phones and the server communicate using the secure communications protocol HTTPS in JavaScript Object Notation (JSON) format.Specific parameters were assigned to each user, such that several devices can be distinguished by the appearance of the chirps.The necessary parameters conceived from the evaluation unit are frequency, impulse impuls , interval duration of the chirp signal, and building map.Based on these data, the smart phone regularly sends out the chirp signal to guarantee localizing the user. (ii) The app controls the loudspeaker of the smart phone and generates the specific sound signals (which is described in chapter 4) inside the smart phones. (iii) The current position and the map are transmitted from the evaluation unit to the smart phone.The position of the user is displayed on the screen of the smart phone in context to the environment, with a map and surrounding items.Figure 11 shows an example of the software application on a smart phone screen.The current position of the user is shown with a dark red point with minimal transparency.The trajectory of the user is shown with decreasing transparency of the red points.The previously calculated data points are more transparent than the actual points.This allows the user to visualize his walk in a chronological sequence.Depending on the connection speed, the positions of the user are provided in real time from the evaluation unit.Displaying the current position on the smart phone has a time latency of approximately 12 ms to 410 ms.This is due to the window size of the signal processing ( wind ∈ (2 ms, 400 ms)), the calculation (1-2 ms), and the transmission of the data to the smart phone by Wi-Fi (2-10 ms). Localization with TDoA In our approach, we use TDoA-Algorithms to calculate the position of the smart phones.When using TDoA-Algorithms for localization, the processing time inside the smart phones is not relevant.For using other localization algorithms, the position accuracy would be affected in a negative sense if the processing time is not measured.Only by knowing the propagation speed of sound and the precise arrival times at the the position of the smart phone device can be calculated.The receivers are connected to a Wi-Fi network, such that they synchronize their clocks and exchange the time differences of arrival of the received sound impulses.A smart phone transmits acoustic signals at a position 0 relative to the receivers with the positions ( = 1, . . 
., N). Further, the receivers detect the signals at different timestamps t_i, which depend on the distance between the receiver and the transmitter. The distance from the smart phone to the i-th receiver can be expressed through the coordinates as in Equation (1). The speed of sound in air, c_air, can be calculated according to Equation (2); it depends on the temperature of the environment, and at a temperature of 25 °C it amounts to 346 m/s. The receivers generate timestamps at the time of arrival of the received signal. When sound waves are used instead of electromagnetic waves, the influence of the receiver synchronization error on the position accuracy is reduced. The synchronization of the receivers is necessary for generating the timestamps for the TDoA algorithms. The receivers are connected together via a wireless network (WLAN), which provides time synchronization to within about 0.1 ms. Hence, the theoretical maximum localization error caused by the synchronization error is 3.4 cm. Smart phones generate specific sound signals at time t_0. Thus, the distance in Equation (1) can be calculated by multiplying the speed of sound c_air by the propagation time t_i − t_0, as given in Equation (3). The time difference Δt_1,2 between the arrival of the signal at receiver 1 and receiver 2 leads to Equation (4), which is the hyperboloid description for two receivers. As a result, an iterative TDoA algorithm with a minimum of three receivers can calculate the location of the smart phones in 2D. Envelope Detection and Particle Filter. Our first approach uses only the amplitude of an incoming sound signal to detect its presence. Therefore, the smart phone generates short sound impulses at 18 kHz. The approach of using envelope detection of sound signals is relatively easy to implement but suffers from several drawbacks. The amplitude of sound decreases rapidly with distance, and in the presence of background noise one cannot distinguish between wanted and unwanted signals. Figure 12 shows the functional diagram of the signal processing and Figure 13 shows the threshold detection. To increase the robustness against measurement outliers and incorrect initialization, we implemented a particle filter for localization of the smart phone. The algorithm, described in [32], is robust against measurement outliers and incorrect initialization; this is achieved through a probabilistic sensor model for TDoA data which explicitly considers the measurement uncertainty and takes into account disproportional errors caused by measurement outliers.
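The TDoA multilateration described above can be written as a small nonlinear least-squares problem on the hyperbolic equations. The following sketch is only an illustration under simplifying assumptions (known receiver positions, a nominal speed of sound of 346 m/s at 25 °C, noiseless time differences) and is not the iterative algorithm or particle filter actually used in ASSIST; all names are placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

C_AIR = 346.0  # nominal speed of sound in m/s at about 25 degrees C

def tdoa_residuals(x, receivers, tdoa, c=C_AIR):
    """Residuals of the hyperbolic TDoA equations: the range difference between
    receiver i and receiver 0 must equal c times the measured TDoA."""
    d = np.linalg.norm(receivers - x, axis=1)
    return (d[1:] - d[0]) - c * tdoa

def locate(receivers, tdoa, x0=None):
    """2D position estimate from >= 3 synchronized receivers by nonlinear
    least squares on the TDoA hyperbolae."""
    receivers = np.asarray(receivers, dtype=float)
    if x0 is None:
        x0 = receivers.mean(axis=0)           # start from the receiver centroid
    sol = least_squares(tdoa_residuals, x0, args=(receivers, tdoa))
    return sol.x

# Example: four receivers at the corners of a 10 m x 10 m localization cell
rx = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
d = np.linalg.norm(rx - true_pos, axis=1)
tdoa = (d[1:] - d[0]) / C_AIR                 # noiseless TDoAs w.r.t. receiver 0
print(locate(rx, tdoa))                        # approximately [3.0, 4.0]
```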
Chirp Impulse and Self-Calibration. In a second approach, we use a chirp impulse to increase the performance of the system by means of pulse compression. Chirp Impulses. We use linear chirp signals to transmit the sound signal. A linear chirp is a signal in which the frequency increases or decreases linearly with time (up- and down-chirps). Some of their characteristics make them well suited for localization. Signals with maximum energy are essential for receiving short signals over large ranges. The influence of interfering signals or white Gaussian noise can be reduced by increasing the signal energy, which increases the signal-to-noise ratio (SNR). The increase in signal energy can be achieved either by increasing the signal amplitude or by increasing the signal length. In radar or sonar applications, chirp signals are used to increase the SNR for a given bandwidth. When autocorrelating a linear chirp signal, the resulting function shows a high and narrow peak. This characteristic allows high temporal accuracy when detecting signals. When cross-correlating chirps in different frequency bands, or up- and down-chirps, the resulting function does not show a distinct peak. This characteristic can be used to have multiple emitters operating at the same time. References [23,28] show this for the detection of sound and ultrasound signals. The chirp impulse works in the interval 0 ≤ t ≤ T, with a start frequency f_0 and an end frequency f_1; it can be described according to Equation (5). The received signal is cross-correlated with stored up- and down-chirp reference signals. The mathematical formula for the cross-correlation of the two signals is given in Equation (6), where one signal is the received signal and the other is the saved reference signal. The maximum of the cross-correlation function is achieved at the perfectly matched time delay; hence, we use a matched filter to maximize the SNR. To detect different smart phones, up- and down-chirps are used to transmit the ID of the specific smart phone as a binary data stream. The cross-correlation is carried out as a convolution, which in turn equals a multiplication of the two signals in the frequency domain. The spectra of the input chirp and the reference chirp are calculated with the fast Fourier transform (FFT). After multiplying the spectra of the input chirp and the reference chirp, the inverse FFT is used to convert the signal back to the time domain. Figure 14 shows the principle of the threshold function in the frequency domain. When a chirp equal to the reference chirp is present in the FFT window, a peak occurs in the output signal. The position of the peak can be related to the time when the input chirp arrived at the receiver. Comparing the times of arrival at multiple receivers, one can realize time difference of arrival (TDoA) based localization. Using a constant, static threshold limits the transmission range, because the threshold must be set high to reduce false detections. However, an adaptive threshold, which rises when a signal is present and falls for lower signal values, can improve the sensitivity of the system. Moreover, an adaptive threshold can also reduce false detections caused by echoes, since the threshold is raised after the receipt of the signal. Therefore, we modified the constant false alarm rate (CFAR) algorithm to calculate the adaptive threshold [33]. Furthermore, we used only the maximum values of the two windows, taking the greatest value from both windows. Figure 15 shows the principle of the CFAR algorithm: it takes two windows, one before and one after the threshold point; then, the greatest value in each window (greatest-of, GO) is taken and compared with the minimum noise level. Figure 16 shows the GOGO-CFAR threshold for low correlation amplitudes at a distance of 20 m: the peaks are detected and the threshold stays above the noise level. Furthermore, false detections by echoes are reduced by increasing the window size; this is shown in Figure 16 between 0.91 s and 0.94 s, where some echoes cause a high correlation value, but the threshold remains above this disturbance.
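A simplified sketch of the chirp correlation stage is given below: a linear reference chirp, the FFT-based matched filter (multiplication of the spectra followed by an inverse FFT), and a greatest-of style adaptive threshold. This is an illustration under assumptions (no ID coding with up- and down-chirps, arbitrary window size and scaling factor), not the GOGO-CFAR implementation running on the receivers.

```python
import numpy as np
from scipy.signal import chirp

FS = 88_150.0   # receiver ADC sampling rate quoted in the text (Hz)

def make_chirp(f0=19_000.0, f1=20_000.0, length=0.05, fs=FS):
    """Linear up-chirp of 50 ms between f0 and f1 (a down-chirp swaps f0/f1)."""
    t = np.arange(int(length * fs)) / fs
    return chirp(t, f0=f0, t1=length, f1=f1, method="linear")

def matched_filter(signal, reference):
    """Cross-correlation via the frequency domain: FFT(signal) times the
    conjugate of FFT(reference), back-transformed with the inverse FFT."""
    n = len(signal) + len(reference) - 1
    spec = np.fft.rfft(signal, n) * np.conj(np.fft.rfft(reference, n))
    return np.abs(np.fft.irfft(spec, n))

def greatest_of_threshold(corr, window=2048, scale=4.0, floor=1e-3):
    """Simplified greatest-of (GO) adaptive threshold: for every sample, take
    the larger of the maxima of a leading and a trailing window and scale it."""
    thr = np.empty_like(corr)
    for i in range(len(corr)):
        before = corr[max(0, i - window):i]
        after = corr[i + 1:i + 1 + window]
        go = max(before.max() if before.size else 0.0,
                 after.max() if after.size else 0.0)
        thr[i] = max(scale * go, floor)
    return thr

# Detection: a peak is declared wherever the correlation exceeds the threshold
ref = make_chirp()
rx = np.concatenate([np.zeros(4000), ref, 0.01 * np.random.randn(4000)])
corr = matched_filter(rx, ref)
peaks = np.flatnonzero(corr > greatest_of_threshold(corr))
```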
Calibration Phase. Absolute localization systems use fixed, installed anchor nodes as infrastructure. Further, the localization system has to know the position (x_i, y_i) of every receiver (anchor node) a priori. A multilateration TDoA algorithm requires the position information of the receivers to calculate the relative position (x, y) of the mobile object (e.g., a smart phone). Normally, the system customer has to measure the exact positions of the receivers, which is required for the installation. This measurement effort increases for large buildings, since the number of receivers depends on the size of the building. The localization system ASSIST uses an anchor-free localization algorithm to calibrate the system. During the calibration phase, the positions of the receivers in the indoor scenario are calculated automatically. Moreover, at least three measured receiver positions are required for the orientation of the system on a map. The sending times of the signals emitted by a smart phone are not known to the localization system, as this would require synchronization of the smart phone over a very unreliable network, or a bidirectional exchange of sound signals. Only the times of reception of the signals at the receivers can be measured. However, the receivers are synchronized and the TDoA values can be calculated. This forms a system of hyperbolic equations for the signal position and any pair of receivers. The goal of the calibration phase is to approximate the relative positions of the receivers with respect to the map. There are several self-calibrating TDoA algorithms available to calculate the positions of the receivers (anchors). In the far-field case, the signals originate from a large distance, such that the propagation front of the signals approximates a line sweeping over the receivers; then, the positions of the receivers, and subsequently the signal directions, can be calculated directly [34][35][36]. For the general case of arbitrarily distributed signal positions, [37] proposed a solution which maximizes the likelihood of the receiver and signal positions, given a Gaussian distribution of measurement errors. For at least eight receivers in the plane or ten receivers in space, [38] showed a direct solution using matrix factorization. For the calibration phase of the localization system, an iterative optimization algorithm is used. The "Iterative Cone Alignment" algorithm [39,40] iteratively solves the nonlinear TDoA optimization problem by a physical spring-mass simulation. The success rate of the calculation of the receiver positions was increased to 99.4% (with only six received signals and four receivers). By using this algorithm, a quick-setup system for smart phone localization is created; there is no need to measure the positions of the receivers. Experimental Results We show measurement results for localization with the acoustic system and a possibility of using an IMU for localization. Constant Frequency Sound Pulse. In the first experiment, we use pulses with constant frequency and use the envelope to detect the presence of the signal. Figure 17 shows the real-world indoor scenario with ten receivers, which were placed around the area of the optical motion capture system. The absolute accuracy of the motion capture system is in the range of about 3 mm [41]. Figure 18 shows the smart phone localization by a particle filter. Further, the particle filter localizes the smart phone with an error of 0.26 m.
Figure 19 shows the cumulative distribution of the distance errors.We analyzed in an additional experiment the receiver range.Therefore, an iPhone 4S is positioned at a variable distance between 1 and 12 meters from two receivers.At each distance of 1 meter, the smart phone transmits 500 acoustic pulses.The length of each acoustic pulse is 50 ms with a frequency of 18 kHz.Figure 20 shows the correct received pulses in percentage over different distances.The accuracy that was achieved is within an interval of ±2. Chirp Sound Pulse. The measurement deviation of the system was evaluated in static experiments.Figure 21 shows the distribution of measurement errors and the normal distribution.ASSIST shows a standard deviation of = 25 cm.Signals with multipath propagation lead to increased standard deviation. Further, we verified our system in dynamic real-world scenario, 2D experiment.For a reference, we defined a walking track of 14 m which was exactly measured.In our experiment, we placed seven receiver devices in an oval of 10 m times 10 m around the walking track in a height of 1 m.A person walked along the defined track. We calculated the positions of the smart phone which transmitted acoustic chirp impulses between 19 kHz and 20 kHz with a length of 50 ms.Figure 22 shows the calculated positions and the defined walking track.Thus, the data shows a well match compared to the track.The trajectory shows a systematic error which depends on the localization algorithm and some measurement errors from the multipath propagation. The smart phone track shows an average deviation of 0.34 m ( = 0.18 m). In an additional experiment, the receiver range was analyzed.Figure 23 shows the signal detection rate for arrived chirp signals as a function of distance which are inside of ±2.The developed receivers (chapter III-C) with a constant threshold were able to receive more than 70% of the transmitted signals up to a distance of 16 m from a smart phone (red curve with circles).The percentage of received signals drops at distances above 16 m.Moreover, with the adaptive GOGO-CFAR threshold, the detection rate is approximately 88% at a distance of 20 m (black curve).Furthermore, at a distance of 30 m, we measured a detection rate of 48%. Echo Analysis. Reflections at walls or hard surfaces (e.g., cabinet) induce echoes and disturb the line of sight signal.Furthermore, the echoes reduce the accuracy of the localization system, which is assumed to work on the line of sight signal.Figure 24 shows the echo analysis for different signals. The abscissa represents the distance of the signal and the ordinate time of the measurement.The brightness indicates the amplitude of the correlation.Every line is generated by using the modulo operation of 300 ms, which is the time interval of the transmitter.We started the measurement at approximately 5 m and moved to 16 m and back.In particular, at distances above 10 m, the echoes become strong and are visible as the shadows of the main signal in Figure 24.Thus, multiple (up to four at approximately 16 m) echoes can be distinguished.To achieve a robust localization, we perform the adaptive GOGO-CFAR threshold to remove the echoes and work on the line of sight signal. Localization with Inertial Sensors. In areas where no infrastructure is available, the integrated inertial sensors can be used to localize the user for a short period. 
International Journal of Navigation and Observation Currently, many different sensor types are integrated in the smart phones.For example, the commercial smart phone Samsung Galaxy S2 provides the data of the integrated inertial sensors like gyroscope, accelerometer, and a magnetic field sensor. User localization based on the inertial sensors leads to measurement errors with increased observation time.The inertial sensor unit additionally supports a method to perform acoustic localization. Since the smart phone is usually held by hands, methods as zero velocity update [42] can not be implemented.In order to deliver correct position information, step length, and orientation information must be determined.For normal walking, each step is set roughly as 0.70 m.The step detection is accomplished by analyzing the accelerations.New step is detected only when the acceleration signal crosses two predefined thresholds with a rising edge.Figure 25 shows the acceleration in -axis during a walk with the two threshold values.The orientation information is obtained by Kalman filter based sensor data fusion, as discussed in [43]. In an experiment, the data from inertial sensors of the smart phone without the ASSIST localization system was used for detecting a walk of 45 m distance in a building.The trajectory of the walk is shown in Figure 26.The red-dashed line shows a reference path which was measured with an inertial measurement unit from Xsens.The blue line shows the calculated path with the data from inertial sensors of smartphone Samsung Galaxy S2.The calculated maximum deviation from the 45 m real track was 1 m. Conclusions In this paper, we presented a smart phone indoor localization system based on sound.The user of the system needs no additional hardware except a COTS smart phone.Through our self-built receivers, which were synchronized with a Wi-Fi network, the arrived signal can be correlated and the position is calculated with a TDoA-Algorithm.The first experiments showed that it is possible to use the system in a real environment to localize a user in an indoor environment with less effort.The system does not require a special knowledge to be installed by the provider; hence, the installation effort is minimized.Through an anchor-free algorithm, the receivers work as a plug-and-play system and there is no need for additional information, since the positions of the receivers can be calculated. In our paper, two different approaches were tested.The envelope detection shows an easy implementation but a limited transmission range of 11 m.The particle filter showed a robust localization of the smart phone with an error of = 0.26 m.In a second approach chirp, impulses were used and correlated in the receivers, which increased the range to 16 m.Moreover, the adaptive GOGO-CFAR threshold extend the detection rate to 48% at a distance of 30 m.Additionally, a self-calibration algorithm is used, which localizes the anchor nodes in a range of = 1.32 m and the smart phone in a range of = 0.18 m. In areas with a poor receiver coverage, the localization with built-in inertial sensors in smart phone was tested.The integrated inertial sensors can be used as an additional localization method to support ASSIST for a short time.The maximum deviation from the reference track of 45 m was 1 m. 
Outlook In our future investigations, we will improve the acoustic localization in situations where there is no line of sight between the smart phone and the receivers. Error minimization can be achieved through fusion of the data from the inertial sensors with the data from the acoustic localization. Additionally, we will use the self-calibration algorithms in combination with the particle filter to increase the robustness. We will modulate an identifier onto the signal to distinguish between different senders and enable multiuser applications. Another possibility is to use time-division multiplexing (TDM) to identify different smart phones: the time domain is divided into time slots which can be used by the respective smart phones to generate the sound. In some applications (e.g., in a supermarket), the receivers should be installed in the ceiling. In this case, the position of the user should be localized in a 2D area with a defined height, and the algorithm must be modified for 2.5D applications. In addition, we will improve ASSIST by reducing the measurement errors caused by multipath propagation. The experimental results should be evaluated with a reference system to measure the systematic error precisely.
Figure and table captions:
Figure 1: Overview of localization systems based on smart phones.
Figure 3: Overview of the ASSIST system with the smart phone, the network of receivers and an evaluation unit.
Figure 4: Frequency response of commercially available smart phones.
Figure 7: Block diagram of the receiver with signal processing (described in Section 4) and Wi-Fi communication.
Figure 8: Frequency response of electret and MEMS microphones in the range from 500 Hz to 25 kHz.
Figure 10: Opening angle of the receivers.
Figure 11: The developed Android software application on the screen of a smart phone.
Figure 12: Description of the signal processing with envelope detection (blue block in Figure 7); the absolute value of the signal is integrated over a floating window.
Figure 15: Adaptive threshold calculation by GOGO-CFAR; the greatest value is taken from both windows.
Figure 16: Adaptive GOGO-CFAR threshold for a real signal with small correlation amplitude (20 m distance); echoes and noise are suppressed.
Figure 17: Photo of the real-world experiment environment.
Figure 19: Cumulative distribution of distance errors for Figure 18.
Figure 21: Measurement errors of ASSIST with correlation and TDoA algorithms in a static experiment.
Figure 23: Measurement values of ASSIST with chirp correlation depending on the distance; the black line shows the adaptive GOGO-CFAR threshold and the red line with circles the constant threshold.
Figure 25: Measurement values of the acceleration sensor during a walk, with the two threshold values.
Figure 26: Trajectory from an experiment: data from the inertial sensors of a smart phone (blue line) and a reference inertial measurement unit (red dashed line).
Table 1: Difference between the average hearing threshold and the sound pressure level emitted by smart phones, in units of the standard deviation.
10,063.2
2015-02-26T00:00:00.000
[ "Engineering", "Computer Science" ]
Sterilization-Induced Changes in Polypropylene-Based FFP2 Masks In the context of the SARS-CoV-2 pandemic and because of the surgical and FFP2 mask (equivalent to the American N95 masks) shortages, studies on efficient sterilization protocols were initiated. As sterilization using irradiation is commonly used in the medical field, this method was among those that were evaluated. In this work, we tested irradiation under vacuum and under air (under both γ-rays and e-beams) and also, for acceptance purposes, washing prior to the e-beam irradiation sterilization process. This article deals with the modifications induced by the sterilization processes at the molecular and the macromolecular scales on an FFP2 mask. Fourier transform infrared spectroscopy in attenuated total reflectance mode, size-exclusion chromatography and thermal-desorption–gas chromatography–mass spectrometry were used to characterize possible damage to the materials. It appeared that the modifications induced by the different sterilization processes under vacuum were relatively tenuous and became more significant when irradiation was performed using γ-rays under air. Introduction Due to the COVID-19 pandemic, an unexpectedly large quantity of surgical and FFP2 masks was needed in a very urgent timeframe, and mask production turned out to be overstretched by these events. For this reason, numerous task forces [1][2][3][4] were looking for a way to sterilize used masks so that they could be reused. Since sterilization is usually performed in the medical field using either γ-ray [5,6] or e-beam [7] irradiation, it initially seemed very interesting to sterilize masks using this protocol. Moreover, for acceptance purposes, washing prior to the sterilization process was found to be unavoidable and has already been addressed in recent papers [8]. However, it is well known that irradiation induces changes in organic materials at the molecular, macromolecular and microscopic levels. Because surgical and FFP2 masks are made of different layers of polypropylene (PP), it was essential to ascertain the materials' resistance to possible sterilization protocols, since PP is actually well known to be quite sensitive to radiolytic degradation [9,10]. Answers to this question arose during each previous pandemic (the last one being the influenza pandemic in 2009 [11][12][13]) but fell into disuse either when the pandemic was disappearing or when mask production reached a sufficient level. Hence, this question must be addressed either for knowledge transfer purposes or for ecological reasons. The filtration performances of masks first depend on their electrostatic properties together with their mechanical resistance. The filtration aspect was recently addressed in a previous paper [14], but the macromolecular aspects were less covered. Thus, we decided to pay attention to this second aspect through a screening of the effect of the sterilization treatments on the polymer architecture. Since the plasticity of PP was shown to be closely linked to residual molar mass values and crystallinity [15], we decided to focus on the modifications induced by the sterilization treatments at the macromolecular scale, together with a screening of the possible structural changes at the molecular scale, particularly dealing with the irradiation effect, this being less covered in existing reviews [16]. Materials Figure 1a presents a picture of the FFP2 medical mask manufactured by Valmy (Mably, France) and kindly supplied by Grenoble Hospital (see previous work [8]).
Those masks are composed of four layers, which will be characterized before and after irradiation (Figure 1b). Sterilization Protocols In this work, various sterilization protocols were tested. They are summarized in Table 1. In the case of the γ-ray irradiation performed at ArcNucleart, the dose rate was about 1.0 kGy h−1, whereas in the case of the 10 MeV e-beam irradiation performed at IONISOS (Chaumesnil, France) using a Mevex A29 device (34 kW, 10 MeV), the dose rate was several hundreds of kGy min−1. When applicable, the washing conditions were as follows [8]: 1 h and 12 min at a constant temperature (60 °C), using either pure water or a neutral detergent, with "Ultimate mineral" surfactants (1 mL/kg) added, along with "Ultimate Forte" disinfectant based on perchloric acid and hydrogen peroxide (5 mL/kg). The Ultimate detergent was chosen because it is commonly used in hospitals. When applicable, vacuuming was performed with a vacuum sealer, Model G210, with vacuum seal rolls from the same supplier (KitchenBoss, Shenzhen, China). Fourier Transform-Infrared Spectroscopy (FTIR) Fourier transform infrared spectra of the polymers were acquired using a Bruker Vertex 70 spectrometer equipped with a Specac Golden Gate single-reflection diamond attenuated total reflectance (ATR) accessory and a DTGS (deuterated triglycine sulfate) detector. Acquisition was performed between 4000 and 500 cm−1 by adding 64 scans with a 4 cm−1 resolution. Although it is a non-quantitative method, this technique was chosen due to its ease of use and lack of need for sample preparation, together with its ability to give significant information on the molecular and morphological changes. Molar mass values in PS equivalents were converted into PP equivalents using the universal calibration equation [17]. The different layers of the masks were analyzed in duplicate. The obtained uncertainties were quite small: the differences between the two analyses were lower than 5% for any given sample. The entire analytical procedure was controlled using MassLynx software (Waters, MA, USA). Characterization at the Molecular Scale The four layers of the Valmy FFP2 mask were analyzed by Fourier transform infrared spectroscopy in attenuated total reflectance mode. According to their FTIR spectra, presented in Figure 2, the four layers corresponded to polypropylene fibers [20][21][22]. Table 2.
Attribution of the additional bands observed in the infrared spectra of layers 2 and 3 of the Valmy FFP2 mask (Lifshutz, 1997). ν: stretching; δ: bending. The table lists the absorption area (cm−1), the amide II bond attribution, and the bands observed on layers 2 and 3 of the Valmy FFP2 mask. On layers 2 and 3, some additional bands, centered at 3295 cm−1, 1640 cm−1 and 1560 cm−1, were observed. They clearly indicate the presence of an additional molecule, which probably belongs to the amide family [23]. This is well ascertained in Table 2, which gives the amide II bonds' characteristic positions along with the bands observed in the infrared spectra of layers 2 and 3 of the Valmy FFP2 mask, and also seems to be supported by the fact that this kind of molecule is known to be an effective process agent [24]. The bands at 1640 cm−1 were slightly shifted compared to theoretical values, but we believe that the difference was due to the associated bond rather than the free one. The additives identified in layers 2 and 3 were characteristic of molecules added to PP for the melt-blowing process. Moreover, it is known that FFP2 masks are generally composed of melt-blown and spun layers. With these two indications, the deduced composition was SMMS (spun/melt-blown/melt-blown/spun, from layer 1 to layer 4, respectively). These attributions are supported by the fact that the compositions of such masks are known to comprise multiple layers of polypropylene nonwoven fabrics, and that melt-blowing is a conventional fabrication method of micro- and nanofibers that is applicable for filtration [25]. After the different sterilization protocols, it appeared necessary to verify the absence of significant evidence of PP-fiber degradation. Figure 3 displays zoomed-in depictions of the 1800-1600 cm−1 (carbonyls) and 1100-700 cm−1 (C=C double bonds) areas of the infrared spectra of two of the four layers of the FFP2 Valmy mask, that is, layer 2 (left column of Figure 3) and layer 4 (right column of Figure 3). The entire infrared spectra are presented in Figure S5 of the Supplementary Materials.
The two other layers (layers 1 and 3) are not presented, but the results are very similar (see Supplementary Materials, Figure S6 for the entire infrared spectra and Figure S7 for the zoomed-in depictions of the areas 1800-1600 cm−1 and 1100-700 cm−1). For all figures, the changes in the chemical structures of the PP fibers are presented on the first line for irradiations performed under vacuum, on the second line for irradiations performed under air, on the third line for Valmy FFP2 masks that were washed with pure water prior to irradiation, and on the fourth and last line for masks that were washed with Ultimate detergent prior to irradiation. It can be observed from Figure 3 that whatever the irradiation nature, the dose and/or the washing conditions prior to irradiation, the PP fibers of the different layers of the mask showed no evident modifications at the molecular level, since neither the disappearance (for example, of C-H) nor the creation (for example, of carbonyl groups) of an infrared band was observed. Some bands showed slight changes upon the different sterilization protocols, which might have originated from the morphological evolution of the polymer chains. It is known from the literature [26,27] that changes in PP crystallinity can be evaluated from FTIR spectra via a relation between band absorbances, where A_X represents the absorbance of the band at the wavenumber X cm−1. Hence, crystallinity changes in the different layers were evaluated. The corresponding values are gathered in Table S1 of the Supplementary Materials, while the crystallinity values are given in Table 3 of Section 3.2.
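The relation itself is not reproduced in the extraction above; in the cited literature it amounts to a ratio of the absorbance of a crystallinity-sensitive PP band to that of a reference band. The sketch below assumes that form; the specific band positions (998 and 973 cm−1, common choices for isotactic PP) and the crude peak-height reading are illustrative assumptions rather than the authors' exact procedure.

import numpy as np

def band_absorbance(wavenumber, absorbance, center, half_width=4.0):
    # Peak absorbance read in a small window around `center` (cm^-1);
    # a crude stand-in for proper baseline correction and integration.
    window = np.abs(wavenumber - center) <= half_width
    return float(absorbance[window].max())

def crystallinity_index(wavenumber, absorbance,
                        crystalline_band=998.0, reference_band=973.0):
    # Ratio A_crystalline / A_reference, a common FTIR crystallinity index
    # for isotactic PP (the band positions are assumptions here).
    a_c = band_absorbance(wavenumber, absorbance, crystalline_band)
    a_r = band_absorbance(wavenumber, absorbance, reference_band)
    return a_c / a_r

Tracking this index for each layer before and after every sterilization protocol gives relative crystallinity changes of the kind gathered in Table S1 and Table 3; in ATR mode the absolute value is only semi-quantitative.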
Characterization at the Macromolecular Scale Even in the absence of molecular-scale markers evidencing the degradation of the PP fibers, it seemed interesting to quantify the damage at the macromolecular scale from the measurement of the average molar mass, in order to ascertain the residual ductility of the FFP2 masks. It is actually known that PP embrittlement occurs at a low oxidation level [28]. Table 3 gathers the average molar masses of the different layers of the sterilized masks, along with the crystallinity ratio of the different layers obtained by FTIR analysis (from Table S1 of the Supplementary Materials), for each kind of sterilization condition. Layer 1, which is very thin, was hard to sample. Since it was not expected to play a significant role in the mechanical behavior of the sample, it was not investigated further in terms of microstructural changes. Let us recall that for PP, a critical molar mass value M′C of about 150 kg·mol−1 was proposed as an end-of-life criterion, corresponding to a strong loss of plasticity below this value [15]. Hence, in the case of the unsterilized mask (L301), it could be observed that layer 4 of both masks was made of "long PP chains" and was likely to display a plastic behavior. In principle, such fibers should display no cracks at the surface, but SEM observations should be conducted to verify this [29]. This was not the case for the inner layers (layers 2 + 3), which had lower average molecular weights and were thus expected to display more brittle behaviors. Moreover, it can be noticed from Table 3 that e-beam irradiation under an inert atmosphere led to a modification of the length of the polymer chains, but the observed changes remained minor even for a 60 kGy dose. The presence of oxygen during e-beam irradiation led to a more severe drop of the average molar masses of the different layers of the FFP2 masks. However, it can be observed from Table 3 that the most important effect at the macromolecular scale was obtained for γ-ray irradiation sterilization, under both vacuum and air, the effect of oxygen being even more marked in this case. The molar mass decreased faster in the presence of oxygen than under vacuum/inert atmosphere. The results for the L308 and L311 masks indicate that there was no, or almost no, effect of washing on the FFP2 masks, whatever the washing conditions (i.e., with pure water, or with water and Ultimate detergent). The effect that was observed for the L310 and L313 masks seemed to be more linked to the irradiation process (L305) than to washing. Hence, it could be deduced from Table 3 that PP masks sterilized by washing were expected to keep their mechanical resistance.
More precisely, it is possible that irradiation under air followed by washing could be much more detrimental to the mechanical properties due to the leaching of short chains produced by chain scission reactions, and maybe because of the stronger surface oxidation, the latter being associated with surface cracking [29]. It was nonetheless observed in the literature that such effects associated with water appear at very long ageing times [30,31], whereas shorter ageing durations in water (1 to 4 h at 60 °C in a 10% NaOH solution at pH 14) were found to induce almost no effect [32]. Moreover, Table 3 recalls the crystallinities obtained from the infrared spectra. The analysis of the crystallinity evolution indicated that all the treatments seemed to slightly increase the crystallinity of the fibers, which might have been a consequence of the chain scission events evidenced here by GPC. This can be interpreted as follows: PP is in the rubbery state under the experimental conditions used for sterilization, and short chain segments can join the crystalline phase [15]. As slight as it may be, this evolution is an indication of the degradation of the FFP2 Valmy masks. To explain the observed trends, we can recall that when PP is irradiated under an inert atmosphere, the main defects formed include double bonds (trans-vinylenes, polyenes) and released gases (principally hydrogen and methane), as depicted in Scheme 1, together with scission and crosslinking [33,34]. Scheme 1. Mechanism of formation of trans-vinylene, hydrogen and methane from a radiolyzed PP [33,34]. The balance between chain scission and crosslinking is given in [35]. Hence, with I being the dose rate, the scission and crosslinking yields can be evaluated from the chain scission and crosslinking concentrations. Those values are in acceptable agreement with the data given in the literature [36,37] where, for isotactic PP, the G_S values range from 0.24 × 10−7 mol J−1 to 0.25 × 10−7 mol J−1 and the G_X values range from 0.16 × 10−7 mol J−1 to 0.17 × 10−7 mol J−1. Unfortunately, for the other irradiation conditions, the data are too scarce for a reliable systematic assessment of the radiation-chemical yields for chain scission and crosslinking.
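The equations referred to above are not reproduced in this extraction. A standard way to obtain scission and crosslinking yields from SEC data is through Saito's relations, which connect the changes in 1/Mn and 1/Mw to the scission and crosslinking concentrations s = G_S·D and x = G_X·D, with D the absorbed dose. The sketch below uses that textbook formulation; treating it as the paper's exact equations is an assumption, and the numerical inputs are purely illustrative rather than values from Table 3.

def saito_yields(mn0, mw0, mn, mw, dose_gy):
    # Saito's relations for simultaneous random chain scission and crosslinking:
    #   1/Mn - 1/Mn0 = s - x
    #   1/Mw - 1/Mw0 = s/2 - 2x
    # with s = G_S * D and x = G_X * D.
    # Units: molar masses in kg/mol, dose in Gy (J/kg), so G comes out in mol/J.
    dn = 1.0 / mn - 1.0 / mn0        # = s - x    (mol/kg)
    dw = 1.0 / mw - 1.0 / mw0        # = s/2 - 2x (mol/kg)
    x = (dn / 2.0 - dw) / 1.5        # crosslinking concentration
    s = dn + x                       # scission concentration
    return s / dose_gy, x / dose_gy  # (G_S, G_X)

# Hypothetical molar masses (kg/mol) before/after a 60 kGy exposure:
g_s, g_x = saito_yields(mn0=120.0, mw0=300.0, mn=100.0, mw=261.0, dose_gy=60_000.0)

Comparing the resulting G_S and G_X with the literature range quoted above is exactly the consistency check made in the text.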
When PP was irradiated under a homogeneous oxidative atmosphere, the main defects at the molecular scale were carbonyl-like bonds (ketones, carboxylic acids, and esters) and alcohol-like bonds (hydroperoxides and alcohols), as illustrated in Scheme 2 [38], together with the release of gases (hydrogen and methane, but also carbon monoxide and dioxide). Therefore, it was not surprising that the molar mass dropped faster for irradiation under air than under nitrogen. Scheme 2. Mechanism of the formation of hydroperoxides and ketones from a radio-oxidized PP [38]. Characterization of the Volatiles Trapped in the Mask A quantification of the hydrogen and methane that evolved from the different layers of the FFP2 mask would have been interesting from a fundamental point of view, for the purpose of discussion in relation to the values of the radiochemical yields determined in the previous section. Here, we considered the evaluation of the trapped gases to be more important; these trapped gases could be identified by TD-GC-MS. To identify the molecules trapped in the different layers of the mask, an overall characterization of the layers was undertaken via thermal desorption. The chromatograms are presented in Figure 4. In the four layers, a group of peaks was observed roughly between 7 and 14 min and was attributed, using the NIST database, to linear and branched alkanes, which probably came from the synthesis process. It was additionally observed that the heavy alkanes were most depleted in layers 2 and 3, i.e., in the melt-blown layers compared to the spun-bonded ones (see the box between 12 and 13 min in Figure 4). Finally, a peak characteristic of butylated hydroxybenzene was found at 16.2 min for the four layers. This seemed to be a fragment of a stabilizer belonging to the hindered phenol family. Figure 5 displays the chromatograms obtained using thermal desorption for the Valmy FFP2 medical mask for all four of the layers, before and after irradiation. It was not possible to identify all the molecules present in the masks before and after irradiation; for this reason, only the main peaks that were present are analyzed in this section.
Characteristic peaks of fragments of a stabilizer of the hindered phenol family were identified: they are marked in Figure 5b with crosses, but they can also be found in all the TD-GC-MS chromatograms. For all of these molecules, which were released in small quantities, the sterilization conditions seemed to have no marked effect. In the case of irradiations performed under vacuum (Figure 5a), no modification of the chromatograms could be identified, indicating weak modification of the PP fibers under these irradiation conditions (even at 60 kGy). The polypropylene chain modifications were more obvious when irradiations were performed using γ-rays under air: the oxidation products are shown in Figure 5b (where they are marked with stars). Even under air, these degradation products were not observed under e-beam irradiation. This provides a clear indication that irradiations conducted under air but at the high dose rates of e-beam irradiation are roughly equivalent to irradiations conducted under an inert atmosphere. When the FFP2 mask was washed prior to sterilization by irradiation, there was no evident modification of the trapped molecules, indicating that this preliminary step did not degrade the PP fibers. This observation was valid not only when pure water was used, but also when Ultimate detergent was used (see Figure S8 of the Supplementary Materials). Conclusions In the context of the SARS-CoV-2 pandemic, studies on the resistance of surgical and FFP2 masks to sterilization processes were of primary necessity. As sterilization using irradiation (γ-rays and e-beams) is commonly used in the medical field, this process was tested here for the first time. For acceptance purposes, washing prior to the sterilization process was mandatory; therefore, this preliminary step was also assessed, at the molecular and macromolecular scales, on the PP fibers. It was observed that washing prior to irradiation does not lead to modification of the polypropylene that constitutes the different layers of the Valmy FFP2 mask, be it with pure water or with a detergent (Ultimate detergent in our case). When irradiation was performed under vacuum, slight changes of the layers were observed. They mainly evidenced an increase in the crystallinity ratio and a decrease in the chain lengths upon irradiation. When using e-beams under an oxidative atmosphere, the dose rate was so high that the modifications of the polymer were roughly equivalent to those observed under vacuum. The most pronounced modification of the different layers of the FFP2 mask was observed when irradiation was performed using γ-rays under atmospheric air. Under these conditions, the dose rate was sufficiently low that a homogeneous (or nearly homogeneous) radio-oxidation process could take place, leading to the emission of oxidized gases, part of which remained trapped in the layers of the FFP2 masks. Additionally, under these conditions, the crystallinity increased and the chain lengths decreased.
The forthcoming conclusion should be confirmed by means of filtration experiments, but even if the modifications that were evidenced seem relatively weak, it is probable that they were sufficient to prevent the proper protection of the wearer after the sterilization process. Since FFP2 mask materials are electrostatically charged to confer the necessary filtration level, a washing step will undoubtedly remove these surface charges. In this case, it is not the modification of the materials that will hinder the reuse of the FFP2 masks, but the loss of filtration efficiency. A study of the evolution of the filtering properties should be conducted to complement the molecular- and macromolecular-scale characterization, realized in this study, of the PP fibers that constitute the layers of the masks. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/polym13234107/s1. Figure S1. Infrared spectra of the elastic strap of the Valmy FFP2 mask; Figure S2. Chromatogram obtained by TD-GC-MS of the molecules trapped in the elastic strap of the Valmy FFP2 medical mask; Figure S3. Infrared spectra of the elastic strap of the Valmy FFP2 mask before and after different sterilization protocols; Figure S4. TD-GC-MS chromatograms of the elastic strap of the Valmy FFP2 mask under different conditions; Figure S5. Infrared spectra of the Valmy FFP2 mask before and after different sterilization protocols; Figure S6. Infrared spectra of the Valmy FFP2 mask in ATR mode before and after different sterilization protocols; Figure S7. Infrared spectra of layer 1 and layer 3 of the Valmy FFP2 mask in ATR mode, before and after different sterilization protocols; Table S1. Crystallinity evolution as a function of the layer under consideration and of the sterilization protocol applied.
6,515.6
2021-11-25T00:00:00.000
[ "Materials Science", "Medicine", "Engineering", "Environmental Science" ]
Perfect simulation of Vervaat perpetuities We use coupling into and from the past to sample perfectly in a simple and provably fast fashion from the Vervaat family of perpetuities. The family includes the Dickman distribution, which arises both in number theory and in the analysis of the Quickselect algorithm, which was the motivation for our work. Introduction, background, and motivation 1.1. Perpetuities in general. Define a perpetuity to be a random variable Y such that Y = W_1 + W_1 W_2 + W_1 W_2 W_3 + · · · (1.1) for some sequence W_1, W_2, W_3, . . . of independent and identically distributed random variables distributed as W. Throughout this paper we assume W ≥ 0 (a.s.) and E W < 1, since these simplifying assumptions are met by the Vervaat perpetuities, which are discussed in Section 1.2 and are the focus of our work. [Some authors define a perpetuity as in (1.1) but with 1 added to the right-hand side.] The distribution of such a random variable Y is also referred to as a perpetuity. General background on perpetuities is provided in the first paragraph of Devroye [1, Section 1]. To avoid repetition, we refer the reader to that paper, which also cites literature about perpetuities in general and about approximate sampling algorithms. Within the general framework of (1.1), the following simple observations (with =_L denoting equality in law, or distribution) can be made: (i) The random variable Y ≥ 0 is finite almost surely; indeed, its expected value is finite: E Y = Σ_{n ≥ 1} (E W)^n = E W / (1 − E W) < ∞. (ii) The perpetuity satisfies the distributional fixed-point equation Y =_L W (1 + Y), (1.2) where, on the right, W and Y are independent. [In fact, this fixed-point equation characterizes the distribution (1.1).] (iii) If W has a density f_W, then Y has a density f_Y satisfying an integral equation. 1.2. Quickselect, the Dickman distribution, and the Vervaat family of perpetuities. Our interest in perpetuities originated with study of the running time of the Quickselect algorithm (also known as Find), due to Hoare [4]. Quickselect(n, m) is a recursive algorithm to find the item of rank m ≥ 1 (say, from the bottom) in a list of n ≥ m distinct numbers (called "keys"). First, a "pivot" is chosen uniformly at random from among the n keys and every key is compared to it, thereby determining its rank (say, j) and separating the other keys into two groups. If j = m, then Quickselect(n, m) returns the pivot. If j > m, then Quickselect(j − 1, m) is applied to the keys smaller than the pivot. If j < m, then Quickselect(n − j, m − j) is applied to the keys larger than the pivot. Let C(n, m) denote the (random) number of key comparisons required by the call Quickselect(n, m), and write 1(A) to mean the indicator that has value 1 if the Boolean expression A is true and 0 otherwise. Let J_n denote the rank of the first pivot chosen. Immediately from the description of the algorithm we find the distributional recurrence relation C(n, m) =_L (n − 1) + 1(J_n > m) C(J_n − 1, m) + 1(J_n < m) C*(n − J_n, m − J_n), (1.3) where, on the right, (i) for each fixed r and s the random variable C(r, s) is distributed as the number of key comparisons required by Quickselect(r, s), the joint distribution of such random variables being irrelevant; (ii) similarly for C*(r, s); (iii) the collection of random variables C(r, s) is independent of the collection of C*(r, s); and (iv) J_n is uniformly distributed on {1, . . . , n} and is independent of all the C's and C*'s. The distribution of C(n, m) is not known in closed form for general finite n and m, so we turn to asymptotics.
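Because the recurrence (1.3) falls straight out of the algorithm, a concrete implementation may help fix ideas. The following Python sketch is only an illustration (not the authors' code): it counts one comparison for each non-pivot key in the current sublist, exactly the accounting used for C(n, m).

import random

def quickselect(keys, m, rng=random):
    # Return (the key of rank m from the bottom, number of key comparisons).
    n = len(keys)
    pivot = keys[rng.randrange(n)]             # pivot chosen uniformly at random
    smaller = [k for k in keys if k < pivot]   # each non-pivot key is compared
    larger = [k for k in keys if k > pivot]    # to the pivot exactly once
    comparisons = n - 1
    j = len(smaller) + 1                       # rank of the pivot
    if j == m:
        return pivot, comparisons
    if j > m:
        result, c = quickselect(smaller, m, rng)
    else:
        result, c = quickselect(larger, m - j, rng)
    return result, comparisons + c

Averaging C(n, m)/(n − 1) over many runs with m fixed and n large gives a quick numerical handle on the limit law discussed next.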
For any fixed m, formal passage to the limit as n → ∞ suggests that Y(n, m) := C(n, m)/(n − 1) has a limiting distribution L(Y) satisfying the fixed-point equation Y =_L U (1 + Y), with U uniform(0, 1). This is indeed the case, as was shown by Mahmoud et al. [12]. Recalling the characterization (1.2), we see that the law of Y is a perpetuity, known as the Dickman distribution. Curiously, if m is chosen uniformly at random from {1, . . . , n} before the algorithm is run, then the limiting distribution of Y(n, m) is the convolution square of the Dickman distribution, as was also shown in [12]. See [6, Section 2] concerning various important settings, including number theory (largest prime factors) and combinatorics (longest cycles in permutations), in which the Dickman distribution arises; also see [6] for some basic facts about this distribution. The support of the distribution is [0, ∞), and simple integral expressions are known for the characteristic and moment-generating functions of this log-concave (that is, strongly unimodal) and infinitely divisible distribution. In addition, the Dickman distribution has a continuous density f that is constant with value e^{−γ} over (0, 1] (here γ is Euler's constant). Over each interval (k, k + 1] with k a positive integer, f is the unique solution to a delayed differential equation. Still, no closed form for f is known. Some years back, Luc Devroye challenged the first author to find a method for simulating perfectly from the Dickman distribution in finite time, where one assumes that only perfect draws from the uniform distribution and basic arithmetic operations such as multiplication (with perfect precision) are possible; in particular, numerical integration is not allowed. The problem was solved even prior to the writing of Devroye's paper [1], but the second author pointed out extensive simplification that could be achieved. The result is the present collaborative effort, which shows how to sample perfectly in a simple, provably fast fashion from the Dickman distribution, and more generally from any member of the Vervaat [15] family of perpetuities handled by Devroye [1]. To obtain the Vervaat perpetuities, one for each value of 0 < β < ∞, choose W = U^{1/β} in (1.2). 1.3. A quick review of Devroye's method. To compare with the Markov-chain-based method employed in the present paper, we first review highlights of Devroye's [1] method for perfect simulation of perpetuities. Devroye sketches a general approach and carries it out successfully for the Vervaat family. The underlying idea of Devroye's approach is simple acceptance-rejection: 1. Find explicit h > 0 and 1 ≤ c < ∞ so that f_Y ≤ h everywhere and ∫ h = c. 2. Generate X with density c^{−1} h. 3. Accept X with probability f_Y(X)/h(X); otherwise, reject X and independently repeat steps 2-3. In the Vervaat case, the last two steps take the following form: 2. Let U_1 and U_2 be independent uniform(0, 1) random variables and set X accordingly. 3. Since f_Y can't be computed exactly, having generated X = x and an independent uniform random variable U_3, one must figure out how to determine whether or not U_3 ≤ f_Y(x)/h(x).
Devroye's solution for step 3 is to find explicitly computable approximations f_n(x) to f_Y(x) and explicitly computable bounds R_n(x) so that, for every x, the approximation error is controlled. Devroye's functions f_n are quite complicated, involving (among other things) • the sine integral function S(t) := ∫_0^t (sin s / s) ds and approximations to it computable in finite time; • explicit computation of the characteristic function φ_Y of Y; • use of quadrature (trapezoidal rule, Simpson's rule) to approximate the density f_Y as the inverse Fourier transform of φ_Y. Devroye proves that the running time of his algorithm is finite almost surely (for any β > 0), but cannot get finite expected running time for any β > 0 without somewhat sophisticated improvements to his basic algorithm. He ultimately develops an algorithm for which the expected running time is finite; more careful analysis would be difficult. Devroye makes no claim "that these methods are inherently practical". We do not mean to criticize; after all, as Devroye points out, his paper [1] is useful in demonstrating that perfect simulation of perpetuities (in finite time) is possible. The approach we will take is very simple conceptually, very easy to code, and (at least for Vervaat perpetuities with β not too large) very fast (provably so). While we have hopes that our methodology can be generalized to apply to any perpetuity, in this paper we develop the details for Vervaat perpetuities for any β > 0. 1.4. Our approach. Briefly, our approach to perfect simulation of a perpetuity from the Vervaat family is to use the Markov-chain-based perfect sampling algorithm known as coupling into and from the past (CIAFTP) [8,9], which produces draws exactly from the stationary distribution of a Markov chain. CIAFTP requires use of a so-called dominating chain, which for us will be a simple random walk with negative drift on a set of the form {x_0 − 1, x_0, x_0 + 1, . . . }, where x_0 is a fixed real number at least 2. In order to handle the continuous state space, multigamma coupling [13] will be employed. As we shall see, all of the Markov chain steps involved are very easy to simulate, and the expected number of steps can be explicitly bounded. For example, our bound is the modest constant 15 in the case β = 1 of the Dickman distribution, for which the actual expected number of steps appears to be a little larger than 6 (consult the end of Section 5). The basic idea is simple. From the discussion in Section 1.1 it is clear that the kernel K given by K(x, ·) := L(W (1 + x)) provides a Markov chain which, for any initial distribution, converges in law to the desired stationary distribution L(Y), the perpetuity. An update function or transition rule for a Markov chain on state space Ω with kernel K is a function φ : Ω × Ω′ → Ω (for some space Ω′) so that for a random variable W with a specified distribution on Ω′ we have L(φ(x, W)) = K(x, ·) for every x ∈ Ω. There are of course many different choices of update function for any particular Markov chain. To employ coupling from the past (CFTP) for a Markov chain, an update function that is monotone suffices. Here, an update function φ is said to be monotone if whenever x ⪯ y with respect to a given partial order on Ω we have φ(x, w) ⪯ φ(y, w) for all w ∈ Ω′. Consider the state space [0, ∞) linearly ordered by ≤, and let W be distributed as in (1.2). Then φ_natural(x, w) := w (1 + x) provides a natural monotone update function. If we wish to do perfect simulation, it would appear at first that we are ideally situated to employ coupling from the past (CFTP).
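To make the update rule concrete, the following sketch simply iterates x ↦ u^{1/β}(1 + x) forward in time; it is a naive, approximate sampler after a fixed burn-in, not the perfect-simulation algorithm developed in the paper, and the update formula used here is the one implied by the fixed-point equation (1.2) with W = U^{1/β}.

import random

def vervaat_forward(beta, steps=200, start=0.0, rng=random):
    # Iterate the natural update x -> U**(1/beta) * (1 + x); after a long
    # burn-in the state is approximately distributed as the perpetuity.
    x = start
    for _ in range(steps):
        w = rng.random() ** (1.0 / beta)
        x = w * (1.0 + x)
    return x

# Crude check of the Dickman mean E Y = E W / (1 - E W) = 1 for beta = 1:
# sum(vervaat_forward(1.0) for _ in range(10_000)) / 10_000

Since x ↦ w(1 + x) is strictly increasing in x for every w > 0, two copies driven by the same randomness never meet under this rule, which is precisely the first of the obstacles discussed next.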
However, there are two major problems: (i) The rule φ natural is strictly monotone, and thus no two trajectories begun at distinct states will ever coalesce. (ii) The state space has no top element. It is well known how to overcome these difficulties: (i) Instead of φ natural , we use a multigamma coupler [13]. (ii) Instead of CFTP, we use CIAFTP with a dominating chain which provides a sort of "stochastic process top" to the state space [8,9]. Our work presents a multigamma coupler for perpetuities that is monotone. Monotonicity greatly simplifies the use of CFTP [14] and CIAFTP. Useful monotone couplers can be difficult to find for interesting problems on continuous state spaces; two very different examples of the successful use of monotone couplers on such spaces can be found in [16] and [5]. 1.5. Outline. In Section 2 we describe our multigamma coupler; in Section 3, our dominating chain. Section 4 puts everything together and gives a complete description of the algorithm, and Section 5 is devoted to bounding the running time. Section 6 briefly discusses approaches similar to ours carried out by two other pairs of authors. The multigamma coupler The multigamma coupler of Murdoch and Green [13] is an extension of the γ coupler described in Lindvall [11] that couples a single pair of random variables. An update function can be thought of as coupling an uncountable number of random variables simultaneously, thus the need for "multi"gamma coupling. The goal of multigamma coupling is to create an update function whose range is but a single element with positive probability, in order to use CIAFTP as described in Section 4. A dominating chain Dominating chains were introduced by Kendall [8] (and extended in [9]) to extend the use of CFTP to chains on a partially ordered state space with a bottom element but no top element. A dominating chain for a Markov chain (X t ) is another Markov chain (D t ) that can be coupled with (X t ) so that X t ≤ D t for all t. In this section we give such a dominating chain for the Vervaat perpetuity, and in the next section we describe how the dominated chain can be used with CIAFTP. We exhibit such a rule ψ in our next result, Proposition 3.1. It is immediate from the definition of ψ that the dominating chain is just a simple random walk on the integers {x 0 −1, x 0 , . . . } that moves left with probability 2/3 and right with probability 1/3; the walk holds at x 0 − 1 when a move to x 0 − 2 is proposed. In the definition of ψ, note that no use is made of w(2). Proposition 3.1. Fix 0 < β < ∞. Let φ be the multigamma coupler for Vervaat perpetuities described in Proposition 2.1. Define and, for x ∈ S := {x 0 − 1, x 0 , . . . }, Then (3.1) holds, and so ψ drives a chain that dominates the φ-chain. A perfect sampling algorithm for Vervaat perpetuities For an update function φ(x, w) and random variables W −t , . . . , W −1 , set To use coupling from the past with a dominated chain for perfect simulation, we require: (i) A bottom element 0 for the partially ordered state space of the underlying chain X, a dominating chain D, and update functions φ(x, w) and ψ(x, w) for simulating the underlying chain and dominating chain forward in time. (This detection of coalescence can be conservative, but we must never claim to detect coalescence when none occurs.) With these pieces in place, dominated coupling from the past [9] is: , then output state x as the random variate and quit. 6. Else let t ← t ′ , t ′ ← t ′ + 1, and go to step 3. 
The update to t ′ in step 6 can be done in several ways. For instance, doubling time rather than incrementing it additively is often done. For our chain step 5 only requires checking to see whether W −t ′ ≤ 1/(1 + D −t ′ ), and so requires constant time for failure and time t ′ for success. Thus step 6 as written is efficient. Because the dominating chain starts at time 0 and is simulated into the past, and then the underlying chain is (conditionally) simulated forward, Wilson [16] referred to this algorithm as coupling into and from the past (CIAFTP). Now we develop each of the requirements (i)-(v) for Vervaat perpetuities. Requirement (i) was dealt with in Section 2, where it was noted that the dominating chain is a simple random walk with probability 2/3 of moving down one unit towards 0 and probability 1/3 of moving up one unit. The chain has a partially absorbing barrier at x 0 − 1, so that if the chain is at state x 0 − 1 and tries to move down it stays in the same position. The stationary distribution of the dominating random walk can be explicitly calculated and is a shifted geometric random distribution; requirement (ii) can be satisfied by letting D 0 be x 0 −2+G where G ∈ {1, 2, . . . } has the Geometric distribution with success probability 1/2. [Recall that we can generate G from a Unif(0, 1) random variable U using G = ⌈−(ln U )/(ln 2)⌉.] Simulating the D-chain backwards in time [requirement (iii)] is also easy since D is a birth-and-death chain and so is reversible. Now consider requirement (iv). Suppose that a one-step transition D −t+1 = D −t + 1 (say) forward in time is observed, and consider the conditional distribution of the driving variable W −t that would produce this [via . What we observe is precisely that the random variable W −t (1) = U 1/β −t fed to the update function ψ must have satisfied W −t (1) > (2/3) 1/β , i.e., U −t > 2/3. Hence we simply generate W −t (1) from the distribution of U 1/β −t conditionally given conditioned on W −t (1) ≤ (2/3) 1/β , i.e., on U −t ≤ 2/3. The random variable W −t (2) is always generated independently as the 1/β power of a Unif(0, 1) random variable. Finally, requirement (v) is achieved by using the multigamma coupler from the last section; indeed, if ever W −t (1) ≤ 1/(D −t + 1), then the underlying chain coalesces to a single state at time −t + 1, and hence also at time 0. A bound on the running time and hence E T = e β ln β+Θ(β) as β → ∞; and Proof. The lower bound in (5.1) is easy: Since D t ≥ x 0 − 1, the expectation of the number of steps t needed to get (U −t ) 1/β ≤ 1/(D −t + 1) is at least x β 0 . We proceed to derive the upper bound. Consider a potential function Φ = (Φ t ) t≥0 such that Φ t depends on D −t , . . . , D 0 and W −t , . . . , W −1 in the following way: We will show that the process (Φ t∧T + (1/3)(t ∧ T )) is a supermartingale and then apply the optional sampling theorem to derive the upper bound. Let F t be the σ-algebra generated by D −t , . . . , D 0 and W −t , . . . , W −1 , so that T is a stopping time with respect to the filtration (F t ). Suppose that T > t. In this case there are two components to Φ t − Φ t−1 . The first component comes from the change from D −t to D −t+1 . The second component comes from the possibility of coalescence (which gives T = t) due to the choice of (1). The expected change in Φ is the sum of the expected changes from these two sources. 
For the change in the D-chain we observe The expected change in Φ due to coalescence can be bounded above by considering coalescence only when D −t+1 = x 0 − 1, in which case we have D −t ∈ {x 0 − 1, x 0 }. But then coalescence occurs with probability at least [1/(1 + x 0 )] β , and if coalescence occurs, Φ drops from (2/3)(x 0 + 1) β down to 0. Therefore the expected change in Φ from coalescence when . So whenever T > t, the potential Φ decreases by at least 1/3 on average at each step, and hence (Φ t∧T + (1/3)(t ∧ T )) is a supermartingale. Since it is also nonnegative the optional sampling theorem can be applied (see for example [3], p. 271) to yield is a geometric random variable with parameter 1/2 and thus has mean 2. Hence E Φ 0 = 1 + (2/3)(x 0 + 1) β , giving the desired upper bound on E T . We now prove (5.2), using notation such as T (β) to indicate explicitly the dependence of various quantities on the value of the parameter β. However, notice that x 0 (β) = 2 for all 0 < β ≤ β 0 := ln(3/2)/ ln 3 and hence that the same dominating chain D may be used for all such values of β. Conditionally given the entire D-process, since 1 − (D −s + 1) −β is the probability that coalescence does not occur at the sth step backwards in time, Y t (β) equals the probability that coalescence does not occur on any of the first t steps backwards in time. Thus and hence We now need only apply the dominated convergence theorem, observing that Y t (β) ≥ 0 is increasing in 0 < β ≤ β 0 and that 1 For any fixed value of β, it is possible to find E T to any desired precision. Let us sketch how this is done for the Dickman distribution, where β = 1. First, we note for simplicity that T in Theorem 5.1 has the same distribution as the time it takes for the dominating random walk D = (D t ) t≥0 , run forward from the stationary distribution at time 0, to stop, where "stopping" is determined as follows. Having generated the walk D through time t and not stopped yet, let U t be an independent uniform(0, 1) random variable. If U t > 2/3, let D move up (and stopping is not possible); if U t ≤ 2/3, let D move "down" and stop (at time t + 1) if U t ≤ 1/(D t + 1). Adjoin to the dominating walk's state space {x 0 − 1, x 0 , x 0 + 1, . . .} an absorbing state corresponding to stopping. Then the question becomes: What is the expected time to absorption? We need only compute the expected time to absorption from each possible deterministic initial state and then average with respect to the shifted-geometric stationary distribution for D. For finitestate chains, such calculations are straightforward (see for example [7], pp. 24-25). For infinite-state absorbing chains such as the one we have created here, some truncation is necessary to achieve lower and upper bounds. This can be done using ideas similar to those used in the proof of Theorem 5.1; we omit further details. The upshot is that is takes an average of 6.07912690331468130722 . . . steps to reach coalescence. This is much closer to the lower bound given by Theorem 5.1 of 5 than to the upper bound of 15 . Our exact calculations confirm results we obtained from simulations. Ten million trials (which took only a few minutes to run and tabulate using Mathematica code that was effortless to write) gave an estimate of 6.07787 with a standard error of 0.00184. 
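The expected coalescence time computed and simulated above (the authors used Mathematica) can be reproduced approximately with a few lines of Python. The sketch below follows the forward-time description in the preceding paragraph: the dominating walk starts from its shifted-geometric stationary law, moves up when U > 2/3 and otherwise down (held at x_0 − 1), and stops as soon as U ≤ 1/(D + 1)^β, the coalescence condition from Section 4. One assumption is made about x_0, whose defining formula is not reproduced in this extraction: it is taken as the smallest integer at least 2 with ((x_0 − 1)/(x_0 + 1))^β ≥ 2/3, which is consistent with the values implied in the text (x_0 = 2 for β ≤ β_0, and bounds 5 ≤ E T ≤ 15 for β = 1).

import random

def smallest_x0(beta):
    # Assumed rule for the barrier of the dominating walk (see lead-in above).
    x0 = 2
    while ((x0 - 1) / (x0 + 1)) ** beta < 2.0 / 3.0:
        x0 += 1
    return x0

def coalescence_time(beta=1.0, rng=random):
    # One draw of T for the Vervaat perpetuity with parameter beta.
    x0 = smallest_x0(beta)
    g = 1                                   # Geometric(1/2) on {1, 2, ...},
    while rng.random() < 0.5:               # equivalent to ceil(-ln U / ln 2)
        g += 1
    d = x0 - 2 + g                          # stationary starting state
    t = 0
    while True:
        t += 1
        u = rng.random()
        if u <= 1.0 / (d + 1) ** beta:      # coalescence: stop and report T
            return t
        if u > 2.0 / 3.0:
            d += 1                          # walk moves up
        else:
            d = max(d - 1, x0 - 1)          # walk moves down, held at x0 - 1

Averaging coalescence_time() over, say, 10^5 draws reproduces the figure of about 6.08 for β = 1 quoted above.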
The algorithm took only a single Markov chain step about 17.4% of the time; more than four steps about 47.6% of the time; more than eight steps about 23.4% of the time; and more than twenty-seven steps about 1.0% of the time. In the ten million trials, the largest number of steps needed was 112. Similar simulations for small values of β led us to conjecture, for some constant c near 1, the refinement E T (β) = 1 + (1 + o(1)) c β as β → 0 (5.7) of (5.2). The expansion (5.7) (which, while not surprising, demonstrates extremely efficient perfect simulation of Vervaat perpetuities when β is small) does in fact hold with 2 −i ln(i + 1) ≈ 1.016. (5.8) We will prove next that the right side of (5.7) provides a lower bound on E T (β). Our proof that it also provides an upper bound uses the same absorbing-state approach we used to compute E T (β) numerically in the case β = 1 and is rather technical, so we omit it here. Using (5.6) and (5.5) together with the inequality e −x ≥ 1 − x for x ≥ 0 we find Application of the dominated convergence theorem to the right side of (5.9) gives the lower bound in (5.7), where c = E ln(D −1 + 1) is given by the series (5.8). Related work An unpublished early draft of this paper has been in existence for a number of years. Perfect simulation of Vervaat perpetuities has been treated independently by Kendall and Thönnes [10]; their approach is quite similar (but not identical) to ours. One main difference is their use of the multi-shift coupler of Wilson [16] rather than the multigamma coupler. The simulation with (in our notation) β = 10 reported in their Section 3.5 suggests that, unlike ours, their algorithm is reasonably fast even when β is large; it would be very interesting to have a bound on the expected running time of their algorithm to that effect. The early draft of this paper considered only the Dickman distribution and used an integer-valued dominating chain D with a stationary distribution that is Poisson shifted to the right by one unit. Luc Devroye, who attended a lecture on the topic by the first author of this paper in 2004, and his student Omar Fawzi have very recently improved this approach to such an extent that the expected number of Markov chain steps they require to simulate from the Dickman distribution is proven to be less than 2.32; see [2]. It is not entirely clear that their algorithm is actually faster than ours, despite the larger average number 6.08 of steps for ours (recall the penultimate paragraph of Section 5), since it is possible that one step backwards in time using their equation (1) takes considerably longer than our simple up/down choice (recall step 2 in the algorithm at the end of Section 4).
5,966.8
2009-08-12T00:00:00.000
[ "Mathematics" ]
Effect of Alkali Metal Atoms Doping on Structural and Nonlinear Optical Properties of the Gold-Germanium Bimetallic Clusters A new series of alkali-based complexes, AM@GenAu (AM = Li, Na, and K), have been theoretically designed and investigated by means of the density functional theory calculations. The geometric structures and electronic properties of the species are systematically analyzed. The adsorption of alkali metals maintains the structural framework of the gold-germanium bimetallic clusters, and the alkali metals prefer energetically to be attached on clusters’ surfaces or edges. The high chemical stability of Li@Ge12Au is revealed by the spherical aromaticity, the hybridization between the Ge atoms and Au-4d states, and delocalized multi-center bonds, as well as large binding energies. The static first hyperpolarizability (βtot) is related to the cluster size and geometric structure, and the AM@GenAu (AM = Na and K) clusters exhibit the much larger βtot values up to 13050 a.u., which are considerable to establish their strong nonlinear optical (NLO) behaviors. We hope that this study will promote further application of alkali metals-adsorbed germanium-based semiconductor materials, serving for the design of remarkable and tunable NLO materials. Introduction Alkalis metals have attracted much attention because they serve as building blocks for novel materials with tunable properties [1], and a great deal of new alkali-based complexes have been recently reported [2][3][4]. In particular, a series of alkalides, proposed as the candidates of nonlinear optical (NLO) materials, exhibit exceptionally large NLO responses and are promising for their spectacular semiconductor potentials in photoelectricity devices and optical communications, e.g., M@C 60 (M = Li, Na, Cs) [5], Li@C 60 -BX 4 (X = F, Cl, Br) [6], Li n F (n = 2-5) [7], Li 2 @BN nanotubes [8], OLi 3 -M-Li 3 O (M = Li, Na, K) [9], Li 3 + (calyx [4]pyrrole)M − [10], Li(NH 3 ) 4 M (M = Li, Na, K) [11], and Li(CH 3 NH 2 ) n Na [12], etc., in which both theoretical and experimental studies play a crucial role in finding ways to enhance the NLO behaviors. On the other hand, the different NLO materials can be expressed by the largely high second-order NLO response at the molecular level [13,14], and the first hyperpolarizability can be quantitatively used to evaluate the potential NLO materials of alkali-based complexes. Meanwhile, owing to the unique electronic properties of alkalides, their derivatives have been extensively reported with the extension to silicon-based clusters, e.g., Si 10 (Li, Na, K) n (n = 1, 2) [15], (Li, Na, K)@Si n Nb (n = 1-12) [16], Si 10 Li 8 [17], and Si n Fe (n = 1-14) [18], which ascribe to loosely bound electrons in alkali-based complexes, resulting in the large NLO responses. It is also noteworthy that although the analogous germanium and silicon species are isovalent, their structures and physicochemical properties are quite different [19][20][21][22][23][24][25]. For example, the static hyperpolarizabilities of pure Si m Ge n (m + n = 7, n = 0-7) clusters were studied by using the density functional theory (DFT) and MP2 ab initio methods [26], and revealed that the enhancement of the hyperpolarizabilities arises from more polarizable character on the germanium atoms, and thus alkali metals-adsorbed germanium-based semiconductor clusters may have the unusual features in high-performance optoelectronic devices for the potential applications. In addition, Knoppe and Ozga et al. 
[27][28][29] found that the gold-containing clusters appear to have strong NLO responses, and the Au atom plays an important role in the optoelectronic application. Obviously, the knowledge of geometries, electronic structures, chemical bonding, and nonlinear optical properties of the species is very important for understanding these applications, but these are rarely reported. With this motivation, we explored the stability and electronic structures of alkali metals-adsorbed Ge n Au semiconductor clusters for the first time, labeled as AM@Ge n Au (AM = Li, Na, and K; n = 2-13), and investigated the effect of alkali metals on the dipole moment, polarizability, and first hyperpolarizability (β tot ). The results suggest that the AM@Ge n Au (AM = Na and K) clusters may be proposed as novel potential high-performance NLO materials, especially for the Na@Ge 7 Au cluster with a large β tot value. Computational Details The geometrical optimizations of the AM@Ge n Au (AM = Li, Na, and K; n = 2-13) clusters were carried out by using the hybrid DFT-B3LYP functional [30,31], implemented in the Gaussian 09 suite of programs [32]. This method provides a good prediction on energy evaluation [33][34][35], in conjunction with the Karlsruhe split-valence basis set augmented with polarization functions (def-SVP) for all alkali metal atoms and the double-ζ LanL2DZ [36][37][38] with effective core potentials (ECPs) for the Ge and Au atoms, and then the low-lying isomers are further reoptimized at the B3LYP/def-TZVP level of theory. In the calculations, the singlet and triplet spin states were examined for each initial structure, and the zero-point vibrational correction was included into the relative energies. According to the previous works on the neutral [24] and cationic [39] Ge n Au clusters, we searched a large number of initial isomers for the alkali metals-adsorbed gold-doped germanium clusters from the following steps. One is that the alkali metal atoms are attached to different germanium positions on the surface, edge or apex of the lowest-energy structures of the Ge n Au clusters. The second is the attaching structure with gold positions adsorbed by alkali metals. The remaining structures were constructed by us. Vibrational frequency computations were conducted to ensure that these low-lying structures are local minima on its potential energy surface. The energy of a molecular system in the presence of a homogeneous electric field can be written as the following equation [40][41][42]: Here, E 0 is the total energy of the molecule without electric field present, F i is the electric field component in the α direction; the µ i , α ij , β ijk terms are the dipole, the polarizability, and the first hyperpolarizability, respectively. The static dipole moment (µ 0 ) and polarizability (α iso ) are defined as follows: The static first hyperpolarizability (β tot ) is obtained as follows: where The density-of-states (DOS) spectra were convoluted utilizing the GaussSum 2.2 program [43] with the full-width at half maximum (FWHM) of 0.3 eV. Chemical bonding analyses were performed using the adaptive natural density partitioning (AdNDP) method proposed by Zubarev and Boldyrev [44]. The molecular orbitals were plotted with the isodensity surfaces (0.02 e 1/2 /(bohr) 3/2 ), and the molecular graphs were visualized using the VMD program [45]. 
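The defining equations for μ_0, α_iso and β_tot are not reproduced in the text above. The sketch below evaluates the conventional static definitions from the Cartesian tensor components (the magnitude of the dipole vector, one third of the polarizability trace, and the norm of the vector β_i = β_iii + β_ijj + β_ikk under Kleinman symmetry); treating these as the exact formulas used in the paper is an assumption, although they are the usual choices in cluster NLO studies.

import math

def mu_total(mu):
    # |mu| from the Cartesian dipole components (mu_x, mu_y, mu_z).
    return math.sqrt(sum(c * c for c in mu))

def alpha_iso(alpha):
    # Isotropic polarizability: (alpha_xx + alpha_yy + alpha_zz) / 3,
    # with alpha supplied as a 3x3 matrix.
    return (alpha[0][0] + alpha[1][1] + alpha[2][2]) / 3.0

def beta_tot(beta):
    # Static first hyperpolarizability from the 3x3x3 tensor, assuming
    # Kleinman symmetry: beta_i = sum_j beta_ijj, beta_tot = |(beta_x, beta_y, beta_z)|.
    b = [sum(beta[i][j][j] for j in range(3)) for i in range(3)]
    return math.sqrt(sum(c * c for c in b))

All three quantities are reported in atomic units in the paper, so the tensor components fed to these helpers should be taken directly from the quantum-chemistry output in a.u.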
The long-range corrected functional CAM-B3LYP [46] was utilized to calculate the excited energies within the framework of the time-dependent density functional theory (TDDFT) [47], as well as for the evaluation of linear and nonlinear (L&NLO) optical properties, e.g., dipole moments (µ 0 ), isotropic polarizability (α iso ), and first hyperpolarizability (β tot ) of the AM@Ge n Au clusters. Geometric Structures The low-lying structures of the AM@Ge n Au (n = 2-13) clusters are displayed in Figure 1. The relative energies of these low-lying isomers, obtained by using the two basis sets as mentioned above, are listed in Table S1 of Supplementary Materials, in which all the calculations indicate that the global minimum for each cluster is in singlet spin states. Meanwhile, one can see from Figure 1 that adsorption of the AM atoms does not largely change the structural framework of the Au-doped germanium clusters, as reported previously by Li et al. [24], and the alkali metals prefer to be attached on clusters' surface or edge with the multi-adsorbed bonds (multi-bonds) rather than the AM-Ge or AM-Au single-adsorbed bond in the stable structures. (Li, Na, K)@Ge 2 Au. 
The lowest-energy structure (2A) of AM@Ge 2 Au (AM = Li, Na, and K) adopts a tetrahedral structure with C s symmetry, generated from the AM atom being capped on the triangular Ge-Ge-Au face of the cluster. The corresponding triplet states are less stable than the singlet states by 1.03-1.43 eV at the B3LYP/def-TZVP level of theory (see Table S1 of Supplementary Materials). According to our calculations, the equilibrium Ge-Ge bond lengths are predicted to be 2.399-2.601 Å in the AM@Ge 2 Au clusters. (Li, Na, K)@Ge 3 Au. 3A is a planar structure with C s symmetry, which can be regarded as the alkali metal being bonded on the (Au-Ge) edge of the rhombic Ge 3 Au cluster, whereas the structure with the alkali metal adsorbed on the other (Ge-Ge) edge is found to be unstable. The 3A geometry can be considered the global minimum of the AM@Ge 3 Au (AM = Li, Na, and K) clusters. In Li@Ge 3 Au, the equilibrium bond lengths are evaluated to be 2.597 Å for the Li-Au bond, 2.515 Å for the Li-Ge bond, 2.558-2.722 Å for the two Au-Ge bonds, and 2.353-2.876 Å for the three Ge-Ge bonds. (Li, Na, K)@Ge 4 Au. As previously reported [24], the ground-state structure of the Ge 4 Au cluster is a distorted pyramid. When the AM atoms are attached on the side face (Ge-Ge-Au) of Ge 4 Au, the most stable 4A structure is formed, in which the equilibrium Li-Ge and Li-Au bond lengths are predicted to be 2.597, 2.615, and 2.771 Å at the B3LYP/def-TZVP level of theory, respectively. In the search for the low-lying configurations of AM@Ge 5 Au (AM = Li, Na, and K), two stable structures (5A and 5B) with close energies (separated by only 0.13-0.16 eV in the singlet spin states) are obtained, while their triplet spin states lie at higher relative energies (at least 0.78 eV) at the B3LYP/def-TZVP level of theory. The geometric difference between the two isomers is that the AM atom is adsorbed on the side face (Ge-Ge-Ge-Au) or the side edge (Ge-Ge) of the distorted triangular prism. Obviously, the global minimum of AM@Ge 5 Au (AM = Li, Na, and K) is found to be the 5A structure. It is noteworthy that for small clusters with n ≤ 5, the lowest-energy structures have the AM atom adsorbed on an Au-adjacent edge or surface of the cluster base, forming stable AM-Au chemical bonds. (Li, Na, K)@Ge 6 Au. Similar to the n = 5 cluster discussed above, we manually designed a great number of initial geometries for the n = 6 cluster size, e.g., adsorbing the AM atoms at different position sites, substituting the apical Ge atom of the Ge 7 Au cluster by the AM atoms, and so on, but the cage-like 6A structure with C s symmetry is the lowest-energy structure for AM@Ge 6 Au (AM = Li, Na, and K). The 6B isomer is a distorted quadrangular prism, which is less stable than 6A by at least 0.13 eV. Meanwhile, it is found that the two different basis sets used herein provide the same energetic ordering for these small clusters (see Table S1 of Supplementary Materials). (Li, Na, K)@Ge 7 Au. As mentioned above, the AM atoms energetically prefer to be adsorbed on the Au-adjacent surface (Ge-Ge-Ge-Au) or edge (Ge-Au) of the quadrangular prism (QP), and thus two stable structures (7A and 7B) are obtained. According to our calculations, the 7A isomer is predicted to be the putative global minimum for the (Li, K)@Ge 7 Au clusters, whereas the 7B isomer is the most stable structure for Na@Ge 7 Au. (Li, Na, K)@Ge 8 Au. 
The 8A and 8B isomers are both bicapped square prisms, which can be viewed as the alkali metal and the Au atom capped on different faces of the lowest-energy Ge 7 Au cluster (a square prism). 8A is the lowest-energy structure for the Li@Ge 8 Au cluster at the B3LYP/LanL2DZ(Ge,Au)/def-SVP(AM) level of theory, while the most stable structure for (Na, K)@Ge 8 Au is the 8B isomer, lying only 0.02-0.04 eV below the 8A isomer. However, the energetic ordering of 8A and 8B for (Na, K)@Ge 8 Au is reversed at the B3LYP/def-TZVP level of theory. Thus, the 8A geometry is found to be the global minimum for the AM@Ge 8 Au (AM = Li, Na, and K) clusters at the B3LYP/def-TZVP level of theory. (Li, Na, K)@Ge 9 Au. Based on the lowest-energy Ge 9 Au cluster [24], the alkali metal-adsorbed complexes can be generated from the AM atoms attaching on the upper Au-Ge layer. It is obvious that the Au-adjacent isomer (9A) is more stable than the Au-outlying one (9B) by ~0.04-0.06 eV in the singlet spin states, consistent with the small clusters (n ≤ 5). (Li, Na, K)@Ge 10 Au. The Ge 10 Au cluster is a C 2v -symmetrical pentagonal prism with the Au atom encapsulated at the central position of the structure [24]. When the alkali metal atoms are directly face-capped on a side quadrangle of Ge 10 Au, a new equilibrium 10A structure, considered as the global minimum, is obtained using the DFT-B3LYP functional. The theoretical bond lengths are predicted to be 2.769 Å for the four equivalent Li-Ge bonds, and 2.715, 2.733, and 2.742 Å for the three kinds of equivalent Ge-Au bonds. (Li, Na, K)@Ge 11 Au. The 11A structure can be regarded as a Ge atom being directly capped on a side edge of the upper pentagon of 10A, or as the alkali metal atoms being directly adsorbed on the side-face pentagon of the lowest-energy Ge 11 Au cluster, as published previously [24]. Similarly, the low-lying 11B isomer can be formed by capping the alkali metal atoms on the side-face quadrangle of the Ge 11 Au cluster, which is less stable than 11A by at least 0.07 eV in energy (see Table S1 of Supplementary Materials). (Li, Na, K)@Ge 12 Au. Similar to the n = 11 cluster discussed above, the 12A and 12B structures can be generated from the different position sites adsorbed by the alkali metal atoms on the lowest-energy Ge 12 Au cluster. Interestingly, the 12B isomer has a highly coordinated surface site adsorbed by the Li atom, which reduces the number of dangling bonds on the cluster's surface, and is regarded as the global minimum of Li@Ge 12 Au. In contrast, adsorption of Na/K at the lower-coordinated edge position (12A) forms the lowest-energy structure for the (Na, K)@Ge 12 Au clusters. (Li, Na, K)@Ge 13 Au. Referring to the equilibrium geometries of the low-lying Ge 13 Au cluster reported previously [24], a great number of initial structures of alkali metal-adsorbed complexes were extensively constructed and optimized in order to locate the global minimum of this cluster size. For comparison, we only show the two most stable isomers, 13A and 13B, for (Li, Na, K)@Ge 13 Au (see Figure 1), which show a small energy difference of 0.03-0.22 eV at the B3LYP/def-TZVP level of theory (see Table S1 of Supplementary Materials). Obviously, the lowest-energy 13A structure undergoes structural relaxation, whereas the low-lying 13B structure keeps the structural framework of the lowest-energy Ge 13 Au cluster. 
Chemical Stability and Electronic Structures In order to explore the size selectivity and the electronic properties of the AM@Ge n Au (n = 2-13) clusters, we plotted the average binding energy (E b ), the dissociation energy (D e , AM@Ge n Au → Ge n Au + AM), the HOMO-LUMO gaps (GAPs), and the vertical ionization potentials (VIP) and vertical electron affinities (VEA) in Figures 2 and 3, respectively; in these definitions, each E term represents a total energy including the zero-point vibrational correction. 
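The defining equations for E b, D e, VIP, and VEA were lost in extraction. A plausible reconstruction, consistent with the verbal definitions given here and in the following discussion (per-atom binding energy, AM-removal dissociation channel, and vertical ionization/attachment energies evaluated at the neutral geometry), is sketched below in LaTeX; the exact algebraic forms used by the authors are assumptions.

```latex
E_b(\mathrm{AM@Ge}_n\mathrm{Au})
  = \frac{E(\mathrm{AM}) + n\,E(\mathrm{Ge}) + E(\mathrm{Au}) - E(\mathrm{AM@Ge}_n\mathrm{Au})}{n + 2}

D_e = E(\mathrm{Ge}_n\mathrm{Au}) + E(\mathrm{AM}) - E(\mathrm{AM@Ge}_n\mathrm{Au})

\mathrm{VIP} = E(\text{cation at neutral geometry}) - E(\text{neutral}),
\qquad
\mathrm{VEA} = E(\text{neutral}) - E(\text{anion at neutral geometry})
```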
It is well known that a larger E b value indicates a higher chemical stability of a cluster. We can see from Figure 2a that the average binding energies of AM@Ge n Au (AM = Na and K) have similar values for each cluster size, which are slightly lower than those of Li@Ge n Au by ~0.03-0.10 eV. However, all of the AM@Ge n Au (AM = Li, Na, and K) clusters show the same increasing trend. For example, their average binding energies increase dramatically up to the size of n = 5 and increase smoothly over the sizes n = 6-13, indicating that the large-sized doped clusters are more stable than the small-sized ones, especially for the n = 10 and 12 clusters. The dissociation energy (D e ) with respect to removal of the AM atom is another useful physical quantity that can also reflect the relative stability of AM@Ge n Au. It is apparent from Figure 2b that the n = 3, 5, and 10 clusters are local maxima in D e and are therefore more stable than their corresponding neighbors. Meanwhile, the D e energetic ordering by alkali metal is Li > K > Na. The HOMO-LUMO gaps (GAPs) of the AM@Ge n Au clusters are shown in Figure 3a. One can find that the gaps (2.04-2.81 eV) of the small-sized clusters with n ≤ 11 are larger than those, 1.74-1.80 eV, of the large-sized clusters. Clearly, the HOMO-LUMO gap values for n = 12 and 13 lie in the typical optical region (e.g., less than 2 eV), making these clusters attractive for cluster-assembled optoelectronic materials, consistent with previous reports [48], e.g., Zn@Ge 12 and Cd@Sn 12 . The vertical ionization potentials (VIPs) and vertical electron affinities (VEAs) of AM@Ge n Au (n = 2-13) are considered to explore the dependence of the electronic structure on the cluster size. The VIP can be evaluated as the energy difference between the optimized neutral species and the single-point cationic species at the optimized neutral geometry. The VEA can be computed by adding one electron to the neutral species in its equilibrium geometry and taking the energy difference. The calculated VIPs and VEAs of the most stable clusters are plotted in Figure 3b. 
As shown in Figure 3b, the VIPs decrease gradually with increasing number of Ge atoms, and a cluster with a small VIP (e.g., n = 12) is closer to a metallic species [49]; conversely, the VEAs show a gradually increasing trend. Similar to the E b and D e values, the Li@Ge n Au clusters give larger VIPs and VEAs than the Na- and K-based complexes. It is notable that the VIPs of AM@Ge 10 Au (AM = Na and K) are 5.48 eV and 5.32 eV, respectively, smaller than those of Li (5.62 eV) and Na (5.40 eV), respectively, indicating that these clusters should have an electronic structure reminiscent of an alkali atom. Density of States To further investigate the electronic effect of alkali metal adsorption on the clusters' surface, we explored the total (TDOS) and partial (PDOS) density of states, taking the stable Li@Ge 12 Au (12B) cluster as a typical case, as shown in Figure 4, which includes all atomic contributions (Li, Ge, and Au) of the cluster to the PDOS; the contributions of the different atomic shells (s, p, d) to the molecular orbitals (MOs) are also discussed. The main valence molecular orbitals (isovalue 0.02 e/a.u. 3 ) of the cluster are depicted in Figure 4. We can clearly see that, for the stable Li@Ge 12 Au (12B) cluster, the σ-type HOMO has an interesting double pumpkin shape with the electron cloud delocalized on the two sides of the Ge-Ge-Au cross section, mostly involving the five-membered Ge 5 rings, due to the hybridization of 12.38% Ge-4s and 85.51% Ge-4p states, whereas the σ-type LUMO originates mainly from the dominant interactions between the Au-5d xz state and the 4s4p hybridized orbitals of the Ge 12 cage. The Au-5d xz orbital contributes only 3.37% to the LUMO, while the 4s4p hybridized orbitals of the 12 Ge atoms contribute 93.70% to the LUMO. Obviously, the small HOMO-LUMO gap (1.74 eV) of the cluster should be largely related to the electron distributions of the frontier molecular orbitals, and the hybridization of the Ge atoms with the metal dopants can increase the HOMO-LUMO gap and enhance the chemical stability of the cluster [22]. The strongest DOS band at around −10.42 eV is mainly ascribed to the σ-type HOMO-19 orbital, which comes mostly from different atomic shells, e.g., 66.77% Au-d yz , 23.19% Ge-4s, and 2.92% Ge-4p y states. In addition, the Au-4d states have strong interactions (28.86%, 46.57%, and 14.26%) with the Ge atoms in the HOMO-22 (−13.35 eV), HOMO-21 (−11.91 eV), and HOMO-12 (−9.19 eV) orbitals, respectively, indicating that the 4d states of the Au atom in these orbitals are involved in the chemical bonding. One can see that the valence electron orbitals of Li@Ge 12 Au are divided into two different subsets occupied by σ and π electrons, and the doped cluster contains eight valence π-electrons in four molecular orbitals, i.e., the −9.19 (A′, π), −6.42 (A′, π), −6.08 (A″, π), and −6.04 (A′, π) orbitals, which belong to one 1S- and three 1P-subshells of the doped cluster, according to the electron shell model [22,50,51]. Chemical Bonding Analysis In order to further explore the bonding properties of the metal dopant and Ge atoms, we performed the adaptive natural density partitioning (AdNDP) analysis proposed by Zubarev and Boldyrev [44], taking the most stable Li@Ge 12 Au (12B) cluster as an example. 
The AdNDP method is based on the concept of electron pairs as the main elements of chemical bonding, and it represents the electronic structure in terms of n-center two-electron (nc-2e) bonds, in which n runs from one (a lone pair) up to the total number of atoms in the cluster. This method has been successfully applied to gain insight into the bonding characters not only of fullerene derivatives [41], but also of boron [52][53][54][55] and transition-metal-doped Si/Ge clusters [22]. In this method, the occupation numbers (ONs) indicate the number of electrons per bond, and the ONs should exceed the established threshold value and be close to the ideal limit of 2.00 |e|. As mentioned above, the Li@Ge 12 Au (12B) cluster has 60 valence electrons and thus 30 chemical bonds, with each bond occupied by two electrons. According to the AdNDP results, there are five typical d-lone pairs (LPs) on Au with ON = 1.93-1.98 |e|, and eight s-lone pairs on the Ge atoms with ON = 1.75-1.78 |e| (see Figure 5), which reveals that the electrons on the Ge atoms are not completely localized into lone pairs, but partially participate in localized or delocalized bonding. As can be seen from Figure 5, ten two-center two-electron (2c-2e) σ-bonds are localized on the cluster's surface with ON = 1.74-1.91 |e|. Meanwhile, a total of seven delocalized σ-bonds can be readily identified: two 3c-2e σ-bonds (ON = 1.74 |e|) on the Ge-Au-Ge triangles at the bottom of the cage structure, two 4c-2e σ-bonds (ON = 1.71 |e|) on the top of the structure, two 5c-2e σ-bonds (ON = 1.81 |e|) on the tetragonal pyramid of the left structure, and one 6c-2e σ-bond (ON = 1.87 |e|) on the tetragonal bipyramid of the right structure, with two vertices occupied by the Au and Li atoms. Obviously, the localized 2c-2e σ-bonds are mainly located on the cluster's surface, whereas all of the delocalized σ-bonds always involve the endohedral Au dopant. This indicates that the delocalized Au-Ge interactions are responsible for the structural stabilization of the lowest-energy Li@Ge 12 Au (12B) cluster. 
Spherical Aromaticity It is well known that aromaticity is one of the important measures for many compounds, and aromatic compounds commonly show high chemical stability relative to non-aromatic ones. For planar structures, aromatic character is identified by using the 4N + 2 Hückel rule [56]. In 2000, however, Hirsch et al. [57] proposed another electron counting rule for three-dimensional (3D) structures, namely the 2(N + 1) 2 rule, which has been proven to be an effective aromaticity criterion and has been extended to inorganic clusters [22,50,58]. In this proposal, the π-electron system of a spherical species can be approximately regarded as a spherical electron gas surrounding the spherical surface [59]. According to the Pauli principle, if the number of π electrons in a spherical species satisfies the 2(N + 1) 2 counting rule, then the 3D structure can be considered aromatic. Figure 4 shows the molecular orbitals of the stable Li@Ge 12 Au (12B) cluster. One can note that the valence electron orbitals are divided into two different orbital sets occupied by σ or π electrons. In particular, the Li@Ge 12 Au (12B) cluster contains eight π-electrons in four molecular orbitals, as mentioned above, and these π-electrons fully satisfy the 2(N π + 1) 2 [N π = 1] counting rule. As a result, the π-electron system makes Li@Ge 12 Au spherically aromatic, and this aromatic character can be regarded as one of the main reasons for the structural stabilization of the endohedral cluster. However, the 2(N + 1) 2 electron counting rule cannot be applied on its own to establish the aromaticity of compounds; e.g., the dianionic Si 12 2− cluster contains eight π-electrons but is predicted to be antiaromatic [60]. Therefore, the aromaticity of the Li@Ge 12 Au (12B) cluster has to be further confirmed by nucleus-independent chemical shift (NICS) calculations, proposed by Chen and co-workers [61], on the basis of magnetic shieldings within the GIAO approximation. A negative NICS value indicates aromaticity, whereas a positive NICS value indicates antiaromaticity. In this study, a ghost atom was placed at the center of the spherical geometry to compute the NICS value. 
At the B3LYP/def-TZVP level of theory, the NICS(0) value is found to be −295.7 ppm for the Li@Ge 12 Au (12B) cluster. Thus, the aromatic character of the cluster, identified by the 2(N π + 1) 2 [N π = 1] counting rule, is confirmed by the large negative NICS value, and this large diatropic NICS(0) value contributes to the high chemical stability of the geometry. Linear and Nonlinear Optical Properties In order to explore the L&NLO behavior of the alkali metal-adsorbed gold-doped germanium clusters, we computed the static dipole moments (µ 0 ), isotropic polarizabilities (α iso ), and static first hyperpolarizabilities (β tot ) using the long-range corrected CAM-B3LYP functional in conjunction with the def2-TZVPD basis sets, as shown in Figure 6. According to the results shown in Figure 6, the doping of alkali metals (Li, Na, and K) on the clusters' surface largely enhances the electric properties of the considered systems. It is evident that the dipole moments of AM@Ge n Au fluctuate with increasing number of Ge atoms, and local maximum peaks are found at n = 3, 7, and 9 for the alkali-based complexes, which are much larger than those of the bare Ge n Au clusters; see Figure 6a. In contrast, the isotropic polarizability of all these complexes increases linearly with increasing cluster size, as shown in Figure 6b, similar to the Li 2 -doped boron nitride clusters (n = 4-8) [8]. In particular, doping with the K atom is predicted to improve the α iso values by ~7-41%, indicating that the alkali metal atoms provide a means of enhancing the isotropic polarizability. Additionally, according to the hard and soft acids and bases (HSAB) principle [62], species with small HOMO-LUMO gaps are less hard and more polarizable, which reveals that the large polarizabilities of the studied species are closely related to their small energy gaps. As shown in Figure 6c, the variation of the first hyperpolarizabilities (β tot ) is interesting and has become the focus of our attention. One can see that the β tot values of all the complexes distinctly decrease up to the size of n = 6 and fluctuate slightly for larger sizes, with the exception of the local maximum peaks at, e.g., n = 7, 9, and 12. Compared with the bare Ge n Au clusters, it is found that the alkali metals can dramatically enhance β tot , but there is a strong dependence on the cluster size, and the values are also sensitive to the geometric structures. As shown in Figure 6c, the ordering of the enhanced β tot values by alkali metal is nearly K > Na > Li, with the exception of the Na@Ge 7 Au and Na@Ge 12 Au clusters. Clearly, the first hyperpolarizabilities of AM@Ge n Au (AM = Na and K) are large enough to establish their strong nonlinear optical response, owing to the increased β tot values (~140-6111%) induced by these two alkali metal atoms. In particular, the Na@Ge n Au (n = 7 and 12) and K@Ge n Au (n = 2 and 3) clusters possess remarkable NLO responses, with β tot values of 13,050, 6288, 9602, and 8812 a.u., respectively. Furthermore, the largest β tot value (13,050 a.u.), that of Na@Ge 7 Au, is comparable to those of Li 2 F (12,347 a.u.) [7] and Li 2 @BN-clusters(8,0) (12,282 a.u.) [8]. 
On the other hand, the largest β tot value is also about 1.87 times larger than that of the Na@Si 9 Nb + cluster (6987 a.u.), which has the largest β tot value among the AM@Si n Nb + clusters reported previously [16]; this shows that germanium-based clusters doped by alkali metals provide larger β tot than the silicon-based clusters. This result suggests that germanium-based clusters with large β tot , serving as building blocks with tunable properties, may be promising for the design of novel macroscopic NLO materials. To further understand the NLO behavior, we performed TDDFT calculations on the clusters with large β tot values at the CAM-B3LYP/def2-TZVPD level of theory. The widely used two-level model is considered to gain more insight into the NLO response [40,63]. Within this model, the static first hyperpolarizability can be expressed in terms of ∆E, f 0 , and ∆µ, which are the transition energy, the oscillator strength, and the difference in dipole moment between the ground state and the crucial excited state, respectively. The crucial excited state is specified by the largest f 0 value, and the β tot value is inversely proportional to the third power of the transition energy. 
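The two-level expression itself was lost in extraction. A standard Oudar-Chemla-type form consistent with the verbal description above (β proportional to the dipole-moment difference and the oscillator strength, and inversely proportional to the cube of the transition energy) is, as an assumed reconstruction rather than a quotation of the original equation:

```latex
\beta_{\mathrm{tot}} \;\propto\; \frac{\Delta\mu \, f_0}{\Delta E^{3}}
```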
The calculated ∆E, f 0 , and ∆µ values for the crucial excited states obtained by TDDFT are listed in Table 1. One can see that the ∆E values for K@Ge 2 Au, K@Ge 3 Au, and Na@Ge 7 Au are 6.056 eV, 5.702 eV, and 3.798 eV, respectively. The smallest ∆E corresponds to the first excited state of the Na@Ge 7 Au cluster, which is in accordance with its largest β tot value. Thus, the two-level approximation can be used for a qualitative description of the polarization mechanism, and a low transition energy is the most significant factor in designing nonlinear optical materials. In order to explore the origin of the second-order NLO response, the features of the molecular orbitals (MOs) involved in the crucial transitions are presented in Figure 7. One can note that the crucial excitation of K@Ge 2 Au originates from a HOMO-3 → LUMO+2 (42%) transition at ~205 nm, and the two MOs are mainly localized on the two Ge atoms with s-lone pairs, and on the Au atom with the d xy - and p y -lone pairs, respectively. For K@Ge 3 Au, however, the large absorption band at ~217 nm with f 0 = 0.297 is ascribed to two main electron transitions, listed in Table 1, namely the HOMO → LUMO+12 (18%) and HOMO-5 → LUMO+2 (12%) transitions. The HOMO is mostly delocalized on the cluster's surface (rhombic moiety) via the σ chemical bonds (e.g., Ge-Ge, Au-Ge, etc.), whereas the LUMO+12 is intensely localized on the K atom with the p x -lone pair (see Figure 7). It is noteworthy that these strong transitions can be designated as intramolecular charge transfer, and this should be related to the large β tot value. Moreover, the difference in charge transfer can give rise to various electronic effects, but all other MOs contribute little to the electronic transitions of the cluster (≤8%). Table 1 lists the major contributions to the electronic transitions of the Na@Ge 7 Au cluster in the crucial excited states. Its strongest transitions can be assigned to three main mixed excitations, HOMO-7 → LUMO (28%), HOMO-6 → LUMO (17%), and HOMO-2 → LUMO+4 (14%). From Figure 7, it is evident that HOMO-7 is delocalized on the QP's surface via σ bonds, and HOMO-2 and HOMO-6 are formed by two π bonds located on the inner and outer QP, whereas the electron distribution of the LUMO is mainly associated with the Na-Au bond rather than the other atoms, and LUMO+4 is mostly delocalized on the QP's surface with an additional distribution on the Au atom. Therefore, the intramolecular charge transfer of the cluster directly influences the transition energy, which is a decisive factor leading to the considerably large first hyperpolarizability. 
Conclusions In the present work, we have systematically investigated the structures, chemical stabilities, and nonlinear optical properties, as well as the chemical bonding and electronic structures, of a series of alkali metal-adsorbed gold-germanium bimetallic clusters using the hybrid DFT-B3LYP method. Structurally, it has been determined that the adsorption of alkali metal atoms does not largely affect the structural framework of the gold-germanium clusters, and the alkali metals energetically prefer to be attached on the clusters' surfaces or edges. The Li-adsorbed bicapped pentagonal prism of the Li@Ge 12 Au cluster is electronically stable because it obeys the spherical aromaticity counting rule. Meanwhile, the molecular orbital analysis reveals that the Au-4d states have strong interactions with the germanium atoms, and the hybridization between the Ge and Au atoms can enhance the chemical stability of the bimetallic cluster. The AdNDP analysis indicates that the localized 2c-2e σ-bonds are located on the clusters' surface, while all the delocalized σ-bonds involve the endohedral Au dopant, which is responsible for the structural stabilization of Li@Ge 12 Au. The static first hyperpolarizabilities are strongly related to the cluster size and geometric structure, and the AM@Ge n Au (AM = Na and K) clusters display large β tot values, which are large enough to establish their strong nonlinear optical behaviors, especially for Na@Ge n Au (n = 7 and 12) and K@Ge n Au (n = 2 and 3). The present results should stimulate future experimental and theoretical studies of germanium-based semiconductor clusters doped by alkali metals for the design of novel nonlinear optical materials.
11,151
2017-07-01T00:00:00.000
[ "Chemistry", "Materials Science", "Physics" ]
Inline spectrometer for shot-by-shot determination of pulse energies of a two-color X-ray free-electron laser An inline spectrometer has been developed to monitor each pulse energy of a two-color X-ray beam. Introduction Recently, two-color operation (Hara et al., 2013; Lutman et al., 2014) of the X-ray free-electron laser (XFEL) has been realised at SACLA and LCLS (Emma et al., 2010). These unique sources should open up advanced XFEL applications, such as X-ray pump/X-ray probe experiments. The data analysis requires information about each pulse energy of the two colors. In addition, the pulse energies, which fluctuate shot by shot, must be monitored during the experiment. Since the total pulse-energy monitor assumes a single-color beam (Emma et al., 2010), a new method is required to measure the spectrally resolved pulse energy. A bent-crystal spectrometer (Zhu et al., 2012) may be used for smaller photon-energy separations, e.g. several eV as in the case of LCLS. However, there is no method for a keV-separated two-color beam. In this short communication we report absolute and shot-by-shot monitoring of two-color pulse energies at 8.05 and 9.1 keV using a polycrystalline inline spectrometer. Fig. 1(a) shows a schematic diagram of the inline spectrometer. The basic design is the same as for the beamline wavelength monitor, but is modified to be compact and portable. The two-color beam is intercepted by a thin polycrystalline diamond film. A 15 µm-thick nanodiamond film was CVD-grown on a silicon substrate, and then the substrate was removed by etching (Tono et al., 2011). The transmittance is calculated to be more than 97% for photon energies above 8 keV (Henke et al., 1993), enabling inline monitoring. A small part of the beam is diffracted kinematically into different directions according to the Bragg condition. A multi-port CCD (MPCCD) detector (Kameshima et al., 2014) captures the diffracted beam image at the repetition rate of SACLA (30 Hz). The film-to-MPCCD distance is adjustable to obtain a proper separation between the two beam images. Experimental setup The performance of the inline spectrometer was evaluated at beamline BL3 of SACLA. The two-color beam was produced by the so-called split-undulator scheme (Hara et al., 2013). The relative pulse-energy ratio can be tuned by changing the numbers of undulator segments. In this experiment, six upstream segments of the undulator were used for a weak beam at 9.1 keV, and the remaining fifteen were used for a strong beam at 8.05 keV. The inline spectrometer must be placed downstream of the final aperture, which was a four-jaw slit in this experiment, because the source-to-aperture distance, and therefore the solid angle accepted by the aperture, depends on the color in the split-undulator scheme. This is the reason why we have constructed the present portable inline spectrometer. Result and discussion The inset of Fig. 1(b) shows a typical image of the diamond 220 diffraction. The photon energy disperses along the X direction. The Y direction corresponds to the spatial distribution. Strictly speaking, the diffraction image is an arc, which is part of the Debye-Scherrer cone. However, the effect of the curvature is minor compared with the broad peak width, and is ignored below. The count of each pixel is summed vertically to obtain the spectrum. Fig. 1(b) shows spectra obtained in different shots. The peak heights change shot by shot, while the peak photon energies are almost fixed. 
The spectrum is reproduced well by a sum of two Lorentzian functions and a constant, in which A is the weight, Γ is the width, and X 0 is the center pixel of each peak; the constant B represents the background due to air scattering. The peak widths are estimated to be about 150 eV. This is larger than the spectral bandwidth of ~50 eV measured separately with the beamline monochromator. The spread is accounted for by the grain size of the diamond film. The width imposes the lower limit of the photon-energy separation between the two colors, which we estimate as about 400 eV. Nevertheless, the peak position can be determined with a nominal resolution of 2.1 eV, which is given by the pixel size of 50 µm and the film-to-MPCCD distance. The readout of the MPCCD is proportional to both the photon energy and the photon number (Kameshima et al., 2014), so that the fitting parameter, A, is proportional to the pulse energy. However, A 1 and A 2 cannot be compared without correcting for the photon-energy-dependent factors, such as the transmittance of air, T, the structure factor, F, and the quantum efficiency of the MPCCD, Q. After correcting for these factors, the absolute pulse energy P i may be obtained from A i up to a common conversion constant C [equation (2)], where i = 1, 2 denotes the color. The relative pulse-energy variation of each color can be determined without knowledge of C. The structure factor appears as |F i | 2 because of the kinematical nature of the diffraction. If we performed an additional measurement in the single-color mode, we could determine C directly with a total pulse-energy monitor. Here, we discuss another approach that does not require changing the color mode. In general, the sensitivity of the total pulse-energy monitor depends on the photon energy. When a pulse-energy monitor measures the two-color beam, the output signal, S, may be given by the sum of the contributions from each color, weighted by the known conversion coefficients D i of the total pulse-energy monitor [equation (3)]. Combining (3) with (2), C can be determined from A 1,2 and S, which are measured experimentally by the inline spectrometer and the total pulse-energy monitor, respectively. In the present experiment, we used a beam-position monitor (BPM) as the total pulse-energy monitor. The BPM consists of a thin diamond film and quadrant photodiodes, which measure the X-rays backscattered by the film. The pulse energy is determined from S, the sum of the four charges measured by the photodiodes. The output of the photodiode is not a current but a charge, because the XFEL is a pulsed source. The conversion coefficient of the BPM, D, is calibrated against a cryogenic radiometer (Kato et al., 2012). Fig. 2(a) shows the relation between the S measured by the BPM and the S calculated from A 1,2 using (2) and (3), indicating the validity of our analysis. The conversion coefficient is estimated to be C = 5.0883 ± 0.00162 from the slope of the linear fit. Now, the shot-by-shot pulse energy for each color can be calculated using (2). The (P 1 , P 2 ) distribution of 10000 shots is plotted in Fig. 2(b). A negative correlation is found between P 1 and P 2 , which is considered to be a natural consequence of the fact that the two colors originate from the same electron beam. 
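As a concrete illustration of the analysis described above, the following Python sketch fits a column-summed spectrum with two Lorentzian peaks plus a constant background and converts the fitted weights into pulse energies. It is only a minimal sketch under stated assumptions: the exact Lorentzian normalisation, the correction formula P_i = C * A_i / (T_i * |F_i|^2 * Q_i), and all numerical values for T, |F|^2, Q, and C are illustrative assumptions, not values or code from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_lorentzians(x, A1, G1, X1, A2, G2, X2, B):
    """Sum of two Lorentzian peaks plus a constant background B.
    A: peak weight, G: width, X: centre pixel. The normalisation is one
    common convention and is assumed, not taken from the paper."""
    lor1 = A1 * (G1 / 2) ** 2 / ((x - X1) ** 2 + (G1 / 2) ** 2)
    lor2 = A2 * (G2 / 2) ** 2 / ((x - X2) ** 2 + (G2 / 2) ** 2)
    return lor1 + lor2 + B

def pulse_energies(A, T, F2, Q, C):
    """Convert fitted weights A_i into pulse energies, assuming
    P_i = C * A_i / (T_i * |F_i|^2 * Q_i): the weight corrected for air
    transmittance T_i, kinematic structure factor |F_i|^2, and detector
    quantum efficiency Q_i, scaled by a common constant C."""
    A, T, F2, Q = map(np.asarray, (A, T, F2, Q))
    return C * A / (T * F2 * Q)

# --- hypothetical single-shot analysis (all numbers are placeholders) ---
x = np.arange(512)                                   # pixel index along the dispersive axis
spectrum = two_lorentzians(x, 400.0, 30.0, 180.0,    # strong 8.05 keV peak
                           30.0, 30.0, 320.0,        # weak 9.1 keV peak
                           5.0)
spectrum += np.random.normal(0.0, 2.0, x.size)       # mimic shot/readout noise

p0 = [300, 20, 180, 20, 20, 320, 0]                  # rough initial guesses
popt, pcov = curve_fit(two_lorentzians, x, spectrum, p0=p0)
A1, A2 = popt[0], popt[3]                            # fitted weights for the two colors

# Placeholder correction factors; in practice C is fixed by requiring the BPM
# signal S to equal D1*P1 + D2*P2 for the same shots.
P1, P2 = pulse_energies([A1, A2], T=[0.90, 0.92], F2=[1.00, 0.95], Q=[0.60, 0.55], C=1.0)
print(f"P(8.05 keV) = {P1:.1f}, P(9.1 keV) = {P2:.1f} (arbitrary units)")
```

The calibration step sketched in the last comment mirrors the approach in the text: with D_1,2 known from the radiometer calibration, a single linear fit of the BPM signal against the corrected weights fixes C without leaving two-color mode.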
Finally, we discuss the errors of P 1,2 . The errors arise from the fitting parameters, C and A 1,2 , and the conversion coefficients, D 1,2 . The average photon numbers at the peak are 4.2 photons per pixel for 8.05 keV and 0.25 photons per pixel for 9.1 keV. The shot noise dominates the fitting error of A 1,2 , while the readout noise of the MPCCD is negligible. The uncertainties of A 1 and A 2 are estimated to be 3.1% and 7.5%, respectively. The larger error for 9.1 keV is due to the weaker signal. The relative uncertainty of P 1,2 is determined by that of A 1,2 , because the error of C is much smaller. The uncertainty of the absolute pulse energy also depends on that of the total pulse-energy monitor. When we adopt the value of 2.5% evaluated at 9.6 keV for the uncertainty of D 1,2 (Kato et al., 2012), we estimate the uncertainty of P 1,2 to be 4.0% for 8.05 keV and 8.0% for 9.1 keV. Conclusion In conclusion, we have successfully developed and operated an inline spectrometer to monitor the relative two-color pulse energies of SACLA shot by shot. Furthermore, using a calibrated BPM, the absolute two-color pulse energies are determined, which will serve for the quantitative analysis of two-color XFEL experiments. Although the photon-energy resolution of the present inline spectrometer is not sufficient for smaller separations, it can be improved by increasing the grain size of the diamond film.
1,899.8
2016-01-01T00:00:00.000
[ "Physics" ]
Targeted mRNA demethylation in Arabidopsis using plant m6A editor Background N6-methyladenosine (m6A) is an important epigenetic modification involved in RNA stability and translation regulation. Manipulating the expression of RNA m6A methyltransferases or demethylases makes it difficult to study the effect of specific RNA methylation. Results In this study, we report the development of Plant m6A Editors (PMEs) using dLwaCas13a (from L. wadei) and the human m6A demethylase ALKBH5 catalytic domain. PMEs specifically demethylate m6A of targeted mRNAs (WUS, STM, FT, SPL3 and SPL9) to increase mRNA stability. In addition, we discovered that a double ribozyme system can significantly improve the efficiency of RNA editing. Conclusion PMEs specifically demethylate m6A of targeted mRNAs to increase mRNA stability, suggesting that this engineered tool is instrumental for biotechnological applications. Supplementary Information The online version contains supplementary material available at 10.1186/s13007-023-01053-7. Background RNA methylation regulates gene expression at the post-transcriptional level and is an important epigenetic regulatory mode [1]. Over 200 types of post-transcriptional RNA modifications have been identified in plants. N6-methyladenosine (m6A) is the most common type of RNA methylation modification on higher eukaryotic mRNAs [2,3]. m6A methylation is reversibly regulated by methyltransferase and demethylase complexes [4]. These components of the m6A modification complex are highly conserved across the plant kingdom [5]. The m6A mark is involved in regulating mRNA processing, development, and stress responses in plants [5]. m6A methylation appears to be a useful strategy to regulate gene expression, plant development and physiological processes [6]. Due to the lack of effective means to detect m6A methylation, research on m6A methylation had long been stagnant. Thus far, studies aiming to manipulate RNA m6A methylation have relied on modulating the expression of RNA methyltransferases or demethylases [7], which has the shortcoming of affecting broad RNA methylation, making it difficult to study the effect of specific RNA methylation. Therefore, it is important to create tools in plants that allow the manipulation of RNA methylation in a more locus-specific manner. The CRISPR/Cas system is a powerful tool for understanding biological function and dynamic variations of nucleic acids [8]. Catalytically dead Cas9 (dCas9) retains site-specific binding but lacks DNA cutting activity. The nuclease-inactive DNA-targeting Cas9 (dCas9) fused with epigenetic regulatory enzymes can manipulate epigenetic properties at specific loci, including DNA methylation and histone methylation/acetylation status [9]. A protein family related to Cas9, the Cas13 protein family, was shown to natively target RNA [10]. Similar to Cas9, nuclease-inactive RNA-targeting Cas13 (dCas13) retains its crRNA-guided RNA binding ability [10][11][12]. In this study, we used catalytically dead LwaCas13a (from L. wadei) [10] and the human m6A demethylase ALKBH5 catalytic domain [13] to develop plant m6A editors (PMEs) that target demethylation of specific mRNAs in plants. We further applied the PMEs system to endogenous mRNAs in Arabidopsis and successfully suppressed target mRNA degradation, suggesting that this engineered tool is instrumental for biotechnological applications. 
Results To construct the RNA editors, we synthesized the human m6A demethylase ALKBH5 catalytic domain (66-292 aa) and fused it to the C-terminus of the inactive dLwaCas13a (R474A and R1046A)-msfGFP structure using the unstructured 16-residue peptide XTEN as a linker [11,12] (Fig. 1A; Additional file 1). Nuclear localization signal (NLS) peptides were added to the N-terminus of dLwaCas13a and the C-terminus of ALKBH5. Catalytically inactive dLwaCas13a can be used as a programmable RNA binding protein. msfGFP was used to enhance the stability of dLwaCas13a [10]. We expressed the dLwaCas13a-msfGFP-XTEN-ALKBH5 fusion sequence under the CMV promoter and the CRISPR RNA (crRNA) sequence under the Arabidopsis RNA polymerase III promoter AtU6 (Fig. 1A). The sequences of the crRNA might be highly conserved and important for Cas13a activity [10]. The 3' terminal poly-U sequences present in RNA polymerase III-transcribed crRNAs are immediately adjacent to the protospacer sequences involved in RNA recognition [14]. To meet the sequence specificity, we also used a double ribozyme system that precisely processes the crRNAs (Fig. 1A-B) [15]. The double ribozyme system contained a hammerhead (HH) type ribozyme [16] at the 5'-end, a crRNA, and a hepatitis delta virus (HDV) ribozyme [17] at the 3'-end (Fig. 1C). After self-cleavage at the predicted sites, the mature crRNA is released (Fig. 1C). To test the PMEs system, we chose WUS, STM, FT, SPL3, and SPL9 as the target genes (Fig. 2A; Additional file 2). Several m6A sites have been identified in the transcripts of these five genes [7,18]. FT, SPL3, and SPL9 are key activators of flowering [7]. m6A methylation of FT, SPL3, and SPL9 mRNAs affects the floral transition. STM and WUS are two key shoot apical meristem (SAM) regulators [18]. m6A methylation on STM and WUS determines shoot stem cell fate in plants. According to Shen et al. (2016) and Duan et al. (2017), crRNA sites were designed near the m6A sites of the five genes [7,18] (Fig. 2A). Studies have suggested that the secondary structure of the crRNA is critical for the editing efficiency of Cas13a/crRNA [10,19,20]. Therefore, the selection of target sequences should avoid disrupting the secondary structure of the crRNA. In this study, the guide sequences targeting the m6A sites of the chosen genes were predicted with the software RNAfold (http://rna.tbi.univie.ac.at/cgi-bin/RNAWebSuite/RNAfold.cgi) (Additional file 3). In addition, we used two classes of construct for expression of the crRNAs. Four units of crRNA expression cassettes under four AtU6 promoters were ligated in tandem (PME-WS and PME-FSS), and four units of crRNA ribozyme cassettes under one AtU6 promoter were ligated in tandem (PME-WS-H and PME-FSS-H) (Fig. 2B). The four constructs, together with the control vector (PME-MCS), were transformed into Arabidopsis (Col-0) via Agrobacterium tumefaciens-mediated transformation. The T 1 transgenic plants were confirmed by PCR analysis of genomic DNA (Additional file 4). qPCR analysis showed that dLwaCas13a-msfGFP-XTEN-ALKBH5 was expressed in the different transgenic lines (Additional file 5). To exclude the effect of the transgene on endogenous gene expression, we analyzed the expression levels of the orthologs of ALKBH5 in the transgenic plants. There are five potential orthologs of human ALKBH5 encoded in the Arabidopsis genome: ALKBH9A, ALKBH9B, ALKBH9C, ALKBH10A, and ALKBH10B [7]. We analyzed the expression of the five genes in WT, PME-MCS #4, and PME-MCS #5, and found that there were no obvious expression changes in the transgenic lines (Additional file 6). 
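As a small illustration of the secondary-structure screening step mentioned above, the sketch below folds candidate crRNA sequences and reports their minimum free energy (MFE) structures. It assumes the ViennaRNA Python bindings (the programmatic counterpart of the RNAfold web server used in the paper) are installed; the direct repeat and spacer sequences shown are hypothetical placeholders, not the crRNAs designed in this study.

```python
# Screen candidate crRNAs for secondary structure, assuming the ViennaRNA
# Python bindings (pip install ViennaRNA) as a stand-in for the RNAfold web server.
import RNA

# Hypothetical direct repeat and example spacers (placeholders only).
DIRECT_REPEAT = "GAUUUAGACUACCCCAAAAACGAAGGGGACUAAAAC"
candidate_spacers = {
    "spacer_1": "ACGUACGUACGUACGUACGUACGUACGU",
    "spacer_2": "UUGCAUGCAUGCAUGCAUGCAUGCAUGC",
}

for name, spacer in candidate_spacers.items():
    crRNA = DIRECT_REPEAT + spacer
    structure, mfe = RNA.fold(crRNA)          # MFE structure in dot-bracket notation
    # Heuristic check: count how many spacer bases are predicted to be paired;
    # heavily paired spacers may fold back and hinder target recognition.
    spacer_struct = structure[len(DIRECT_REPEAT):]
    paired = sum(1 for c in spacer_struct if c in "()")
    print(f"{name}: MFE = {mfe:.1f} kcal/mol, paired spacer bases = {paired}/{len(spacer)}")
```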
We then verified the effect of PMEs on m6A modification of the selected genes in T 3 Arabidopsis transgenic plants. It is generally believed that the m6A level is negatively correlated with target gene expression [3]. Therefore, we utilized a qPCR assay to detect target gene expression. The results of qPCR showed that the four constructs (PME-WS, PME-FSS, PME-WS-H, and PME-FSS-H) targeting the selected genes increased the mRNA levels of these genes compared to the control vector (PME-MCS) (Fig. 3A-E). Notably, for the constructs with the double ribozyme system, the mRNA levels of the selected genes were significantly increased. These results implied that PMEs might modify the m6A level of target mRNAs. To determine the m6A level of the target mRNAs, we performed a SELECT-qPCR assay. SELECT-qPCR is a newly developed method for m6A level detection at a target site with low cost and high efficiency. The m6A mark hinders the extension and ligation of the template, and subsequent qPCR analysis allows quantification of the relative template abundance after elongation and ligation [3]. In this study, SELECT-qPCR analysis revealed that PME-FSS and PME-FSS-H differentially decreased the methylation levels of FT, SPL3, and SPL9 (Fig. 3F-H). In particular, the efficiency of the crRNA targeting the SPL3 3'UTR was higher than that of the other crRNAs (Fig. 3F-H; Additional file 7A-D). Interestingly, PME-FSS-H with the double ribozyme system showed a higher modification efficiency (a larger fold change relative to the control). There are multiple m6A sites in SPL3 mRNA, located near the 5' and 3' UTRs (Fig. 2A). To determine the modification efficiency of PMEs at these sites, we also detected the change of m6A levels near the SPL3 5' UTR using SELECT-qPCR (Additional file 7E-H). The results showed that PME-FSS and PME-FSS-H could not effectively change the m6A level near the SPL3 5' UTR. Similar to Cas9-driven transcriptional activation and base editors [21,22], the PMEs system had a limited editing scope. FT, SPL3, and SPL9 are key genes regulating Arabidopsis flowering [7]. The expression levels of the three genes show a significant positive correlation with flowering in Arabidopsis. The number of rosette leaves at flowering was used to assess the flowering time [23]. We counted the number of rosette leaves at flowering in the T 3 transgenic plants. Transgenic plants with PME-FSS or PME-FSS-H had fewer rosette leaves than plants with PME-MCS (Additional file 8A-B). These results provided further confirmation of the early-flowering phenotype of PME-FSS and PME-FSS-H plants. Taken together, these data showed that targeted demethylation of functional gene transcripts can be efficiently achieved using the PMEs system. Discussion RNA methylation is an important mode of epigenetic regulation that regulates gene expression at the post-transcriptional level [1]. As the most common type of RNA methylation modification on higher eukaryotic mRNAs, m6A methylation affects RNA stability and translational regulation in plants and animals [4]. Currently, studies involving m6A modification primarily rely on modulating the expression of RNA methyltransferases or demethylases [7], which has the shortcoming of affecting broad RNA methylation, making it difficult to study the effect of specific RNA methylation. In this study, we successfully applied plant m6A editors (PMEs) to perform targeted demethylation of specific mRNAs in Arabidopsis (Fig. 
3A-H; Additional file 7). Taken together, these observations suggest that this technology allows, to a certain extent, targeted RNA demethylation, thus avoiding broad epigenetic changes. Given the importance of m6A modification for eukaryotic mRNAs, we envisage that the PMEs system will be widely adopted to accelerate plant m6A methylation research.

In both mammalian and plant systems, m6A methylation is crucial: knockdown of the gene METTL3, encoding the m6A methyltransferase, causes embryonic death in mice [24]. In plants, m6A is also required for normal development, and disruption of the m6A writer subunit leads to embryonic lethality in Arabidopsis [25][26][27][28] and early microspore degeneration in rice [29]. In addition, m6A is involved in various other physiological processes. The plant RNA m6A demethylases ALKBH10B and ALKBH9B (homologs of the human m6A demethylase ALKBH5) affect floral transition [7] and viral infection [30]. Yu et al. (2021) improved crop yield by introducing the human m6A demethylase FTO to manipulate plant m6A levels [31]. The PMEs system can specifically modify the m6A level of targeted mRNAs and may therefore contribute to mining more m6A sites relevant to high crop yield.

Sequences of the crRNA might be highly specific and critical for Cas13a activity [10,19]. crRNAs transcribed from RNA polymerase III promoters possess 3' terminal poly U sequences immediately next to the protospacer sequences involved in RNA recognition [32]. The double ribozyme system has been used to precisely process gRNAs for Cas9 or Cpf1 [15,32]. Comparing the results of PMEs with and without the double ribozyme system demonstrates the relatively high efficiency of the double ribozyme system in PMEs (Fig. 3A-H). These findings also lay the foundation for the future use of RNA polymerase II promoters, which provide greater flexibility for controlling the temporal and spatial expression of genes in vivo and can bypass short internal termination sites to produce long transcripts, unlike RNA polymerase III promoters [33].

A further improvement of this system could be achieved by integrating a nuclear export signal (NES), which would change the subcellular localization of the Cas13a-ALKBH fusion protein and could enhance the editing efficiency [11,12]. Meanwhile, the use of full-length ALKBH5 or a more active RNA-targeting enzyme such as Cas13b in plants may also greatly improve the efficiency of demethylation [11,12]. The PMEs system described here holds great promise for future RNA regulation research.

Conclusion In summary, we developed plant m6A editors (PMEs) using dLwaCas13a (from L. wadei) and the human m6A demethylase ALKBH5 catalytic domain. We found that PMEs specifically demethylate m6A of targeted mRNAs and thereby increase mRNA stability. In addition, use of the double ribozyme system could further improve the RNA editing efficiency of PMEs. The lack of an effective tool has made it difficult to study the effects of specific RNA methylation events, and this tool therefore represents a significant advance in the field of RNA methylation.

Plant material and transformation Arabidopsis (Arabidopsis thaliana) ecotype Columbia (Col-0) used in this study was kindly provided by Dr. Shenxiu Du (South China Agricultural University). The constructs were introduced into A.
tumefaciens strain EHA105, and then transformed into Arabidopsis (Col-0) by the floral dip method.The seeds were collected from the T 0 plants, screened on 1/2MS plates containing 25 mg/L hygromycin, and transplanted to soil.T 1 plants were confirmed by PCR analysis of genomic DNA.DNA extraction was performed from young leaves of T 1 plants using a CTAB protocol.PME-DEC-F and NOS-DEC-R designed according to the sequence of PME-MCS were used for detecting positive transgenic plants (Additional file 9).Plants were grown in pots of soil in controlled conditions at 22 ℃, under long day (16 h light/8 h dark). The homozygous lines were selected by examining the kanamycin resistance of T 3 seedlings.The number of rosette leaves were counted at flowering time. Vector construction The Arabidopsis codon optimized dLwaCas13a-msfGFP-XTEN-ALKBH 66-292 was synthesized by GENEWIZ.This fragment together with CMV promoter was inserted into EcoRI-linearized pYL1300U-aUf vector and formed PME-MCS using home-made Gibson Assembly mix [34,35].For PME-WS and PME-FSS, each crRNA expression cassette is composed of three parts, contain a snRNA promoter (AtU6), direct repeat (DR), and target sequence.crRNA expressing cassettes were assembled by single step overlap PCR according to previous method [36].Briefly, the first round of PCR (20 µL) used four primers, the universal U-F and gRNA-R (0.2 mM each), and two target sequence-containing chimeric primers DR-R and T#-F (0.1 mM each), 0.2 U of Phanta Max Super-Fidelity DNA Polymerase (Vazyme), and pYLgRNA-AtU6-29 plasmid (20 ng each) as templates, for 25 cycles (94 °C, 10 s; 58 °C, 10 s; 72 °C, 15 s).The second round of PCRs (50 µL) were performed by using 0.4 µL of the first PCR products as templates, and 0.2 mM homologous sequences-containing chimeric primer pairs U-GA-# and Pts-GA-# (0.2 mM each).PME-MCS vector (100 ng) linearized with AscI was mixed with purified PCR products of crRNA expression cassettes (15 ~ 20 ng each), and the mixture was adjusted to 5 µL, and then mixed with 5 µL of home-made Gibson Assembly mix.After incubation at 50 °C for 30 min, the product (1 µL) was transformed into commercial E. coli competent cells.The constructs PME-WS and PME-FSS were confirmed by PCR and DNA sequencing.For PME-WS-H and PME-FSS-H, crRNA expression cassettes, containing a snRNA promoter (AtU6) and four crRNAs flanked with double ribozyme system, were synthesized by GENEWIZ.Synthesized crRNA expression cassettes (15 ~ 20 ng) were inserted into PME-MCS vector (100 ng) linearized with AscI using home-made Gibson Assembly mix [20].The constructs PME-WS-H and PME-FSS-H were confirmed by PCR and DNA sequencing.Primers used in vector construct were listed in Additional file 9. 
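As a rough aid to the assembly step above, the insert amount corresponding to a chosen insert:vector molar ratio can be computed from the fragment lengths. The backbone and cassette sizes and the 2:1 ratio in the sketch below are assumptions, not values reported in this study.

```python
# Sketch: compute the insert mass for a chosen insert:vector molar ratio in a
# Gibson assembly reaction. The fragment lengths and the 2:1 ratio below are
# illustrative assumptions, not values reported in the paper.

AVG_BP_MASS_NG_PER_PMOL = 0.65  # ~650 g/mol per bp -> 0.65 ng per pmol per bp

def pmol(ng: float, length_bp: int) -> float:
    """Convert a DNA mass (ng) to pmol for a fragment of the given length."""
    return ng / (length_bp * AVG_BP_MASS_NG_PER_PMOL)

def insert_ng(vector_ng: float, vector_bp: int, insert_bp: int, molar_ratio: float = 2.0) -> float:
    """Mass of insert (ng) giving the requested insert:vector molar ratio."""
    return pmol(vector_ng, vector_bp) * molar_ratio * insert_bp * AVG_BP_MASS_NG_PER_PMOL

if __name__ == "__main__":
    vector_bp = 12000    # assumed size of the linearized PME-MCS backbone
    cassette_bp = 600    # assumed size of one crRNA expression cassette
    ng = insert_ng(vector_ng=100.0, vector_bp=vector_bp, insert_bp=cassette_bp)
    print(f"Use ~{ng:.1f} ng of a {cassette_bp} bp cassette per 100 ng vector (2:1 molar ratio)")
```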
Quantitative reverse transcription PCR (qPCR) analysis For expression analysis, total RNA from Arabidopsis shoot apices (for expression analysis of WUS, STM, FT, SPL3 and SPL9) and leaves (for expression analysis of five potential orthologs of human ALKBH5) was isolated using TRIzol reagent (Thermo, USA). Total RNA was used to synthesize cDNA from each sample using M-MLV Reverse Transcriptase (Promega, USA) according to the manufacturer's instructions. Specific primers for qPCR were designed according to the gene CDS sequences and are listed in Additional file 9. The qPCR was conducted using ChamQ SYBR qPCR Master Mix (Vazyme, China) with three biological repeats. TUB2 (At5g62690) was used as an internal control to normalize target gene expression.

m6A level analysis of target mRNAs SELECT-qPCR is an efficient method for m6A level detection of target mRNAs. SELECT-qPCR was performed according to the method described previously [37]. Total RNA (1500 ng) was mixed with 40 nM up primer, 40 nM down primer and 5 µM dNTP in 17 µL 1× CutSmart buffer (NEB, USA). The RNA and primers were incubated as follows: 90 °C for 1 min, 80 °C for 1 min, 70 °C for 1 min, 60 °C for 1 min, 50 °C for 1 min and 40 °C for 6 min. The RNA and primer mixture was then incubated with 3 µL of a mixture of 0.01 U Bst 2.0 DNA polymerase (NEB, USA), 0.5 U SplintR ligase (NEB, USA) and 10 nM ATP (NEB, USA) at 40 °C for 20 min, and then denatured at 80 °C for 20 min. 2 µL of the final reaction mixture was used for the SELECT-qPCR reaction. The PCR cycling program was as follows: 95 °C, 5 min; 95 °C, 10 s then 60 °C, 35 s for 40 cycles; 95 °C, 15 s; 60 °C, 1 min; 95 °C, 15 s; 4 °C, hold. Primers for SELECT-qPCR are listed in Additional file 8. All assays were performed with three independent experiments.

Fig. 1 Schematic view of plant m6A editors (PMEs). (A-B) Schematic diagram of the construct for PMEs in Arabidopsis. ALKBH-CD represents the human m6A demethylase ALKBH5 catalytic domain (66-292 aa); dLwaCas13a-msfGFP represents inactive dLwaCas13a (R474A and R1046A)-msfGFP; the XTEN linker separates dLwaCas13a-msfGFP and ALKBH-CD. The dLwaCas13a-msfGFP-ALKBH-CD construct with two conserved nuclear localization signals (NLS) was driven by the CMV 35S promoter. The RNA polymerase III promoter (AtU6) was employed to express crRNAs. For PME-WS-H and PME-FSS-H, the double ribozyme system was used for crRNA processing. DR indicates direct repeat and T indicates target for PMEs. (C) A self-cleaving double ribozyme system for precise processing of mature crRNAs. The upper graph is the predicted secondary structure of the pre-crRNA, containing a Hammerhead ribozyme at the 5′-end (light yellow background), the sequence-specific crRNA portion in the middle (light blue background), and a HDV ribozyme at the 3′-end (light yellow background). The mature crRNA is released from the pre-crRNA through self-catalyzed processing. The mature crRNA contains a direct repeat (universal for all crRNAs) and a spacer (complementary to the target sequences).

Fig. 2 Design and construction of PMEs inducing targeted demethylation of SPL9, SPL3, FT, WUS, and STM mRNAs. (A) Schematic representation of the positions of the m6A sites and the regions targeted by crRNAs. The CDS and UTR are represented by black and gray boxes. Yellow bars represent the m6A sites. Red arrows indicate the crRNAs, which were designed to target sequences near the m6A sites. Black arrows indicate the primers for SELECT-qPCR analysis. (B) Schematic illustration of PME-MCS, PME-WS, PME-FSS, PME-WS-H, and PME-FSS-H.

Fig. 3 PMEs induce multiple demethylations of mRNAs. (A-E) qPCR analysis of the abundance of target mRNAs in T3 transgenic seedlings. TUB2 was used as internal control. Error bars show SD (n = 3). Different letters at the top of each column indicate a significant difference at p < 0.05 determined by the Tukey test. (F-H) SELECT-qPCR analysis of the m6A level of target mRNAs in plants transfected with PME-MCS, PME-FSS, and PME-FSS-H, respectively. Error bars show SD (n = 3). Different letters at the top of each column indicate a significant difference at p < 0.05.
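The qPCR and SELECT-qPCR readouts described above reduce to simple Ct arithmetic. The sketch below illustrates a 2^-ΔΔCt-style calculation with invented Ct values; the authors' exact normalization scheme (for example, input correction of the SELECT reactions) may differ.

```python
# Sketch: 2^-ΔΔCt-style calculations for the two readouts described above.
# Ct values below are invented for illustration; the authors' exact normalization
# scheme (e.g., input correction for SELECT) may differ.

def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Target mRNA level relative to the control line, normalized to TUB2 (2^-ΔΔCt)."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

def relative_select_signal(ct_site: float, ct_site_ctrl: float) -> float:
    """Relative abundance of the elongation/ligation product at a probed site.
    Because m6A hinders extension and ligation, a higher value (more product,
    lower Ct) is consistent with a lower m6A level at that site."""
    return 2.0 ** (-(ct_site - ct_site_ctrl))

if __name__ == "__main__":
    # SPL3 expression in a PME-FSS-H line vs. the PME-MCS control, TUB2-normalized
    fold = relative_expression(ct_target=22.1, ct_ref=18.0,
                               ct_target_ctrl=23.6, ct_ref_ctrl=18.1)
    # SELECT product at the SPL3 3'UTR site, PME-FSS-H vs. PME-MCS
    select = relative_select_signal(ct_site=24.0, ct_site_ctrl=25.2)
    print(f"relative SPL3 mRNA level: {fold:.2f}-fold of control")
    print(f"relative SELECT signal at SPL3 3'UTR: {select:.2f}x control "
          f"(higher signal suggests reduced m6A)")
```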
4,193
2023-08-09T00:00:00.000
[ "Biology" ]
Milk Composition of Free-Ranging Impala (Aepyceros melampus) and Tsessebe (Damaliscus lunatus lunatus), and Comparison with Other African Bovidae Simple Summary Until now, the milk composition of impala and tsessebe has been unknown. Our study showed that the composition of impala milk was 5.56 ± 1.96% fat, 6.60 ± 0.51% protein, and 4.36 ± 0.94% lactose, and that of tsessebe milk was 8.44 ± 3.19%, 5.15 ± 0.49%, and 6.10 ± 3.85%, respectively. The fatty acid composition and protein properties also differed. The data of these two species were subjected to an interspecies comparison with 13 other antelope species by statistical methods. This showed that the milk of tsessebe is similar to that of its relatives of the Alcelaphinae sub-family. Although the impala is a close relative of the Alcelaphinae, its milk composition finds comparison with a different sub-class, the Hippotraginae. The information contributes to the phylogenetic properties of milk and milk evolution. Abstract The major nutrient and fatty acid composition of the milk of impala and tsessebe is reported and compared with other Bovidae and species. The proximate composition of impala milk was 5.56 ± 1.96% fat, 6.60 ± 0.51% protein, and 4.36 ± 0.94% lactose, and that of tsessebe milk was 8.44 ± 3.19%, 5.15 ± 0.49%, and 6.10 ± 3.85%, respectively. The high protein content of impala milk accounted for 42% of gross energy, which is typical for African Bovids that use a “hider” postnatal care system, compared to the 25% of the tsessebe, a “follower”. Electrophoresis showed that the molecular size and surface charge of the tsessebe caseins resembled that of other Alcelaphinae members, while that of the impala resembled that of Hippotraginae. The milk composition of these two species was compared by statistical methods with 13 other species representing eight suborders, families, or subfamilies of African Artiodactyla. This showed that the tsessebe milk resembled that of four other species of the Alcelaphinae sub-family and that the milk of this sub-family differs from other Artiodactyla by its specific margins of nutrient contents and milk fat with a high content of medium-length fatty acids (C8–C12) above 17% of the total fatty acids. Introduction Insight into the nutritional and biochemical properties of milk synthesis is difficult to investigate in a single species. However, this may be overcome by comparative studies between species. Milk of ruminants, especially the commercially exploited animals, has been studied extensively. These include the cow (Bos taurus), water buffalo (Bubalus bubalis), yak (Bos grunniens), sheep (Ovis aries), goat (Capra hircus), camel (Camelus bactrianus) [1][2][3][4], became available, we were able to extend the comparison by a statistical approach and described phylogenetic differences between several taxa of African ruminants [12]. Impala (Aepyceros melampus) occur in the eastern woodland parts of Africa from northern Kenya south to the KwaZulu-Natal region of South Africa, extending westwards to the extreme southern parts of Angola. They both graze and browse, depending on availability of food and season. In the region under study, the lambs are born during December and January, within four to five weeks. The young are hidden for a day or two before joining their mother with the herd. The nursing period is five weeks [34]. 
Tsessebe (Damaliscus lunatus lunatus) have a wide, scattered, and discontinuous distribution from Senegal to eastern Ethiopia and southwards to the Mpumalanga region of South Africa. They are exclusively grazers. In the southern distribution range, the calves are born in October and November. Similar to the other Alcelaphinae species, tsessebe young immediately follow the mother, grazing starts at two months of age, while nursing is continued to eight months [34]. In the current study, the milk composition of impala and tsessebe is reported for the first time. This is compared with data of closely related species, as well as 13 other species from 8 African Artiodactyla subfamilies by a statistical comparison, specifically with regard to the lactose, oligosaccharides, proteins, non-protein nitrogen (NPN), fat, and fatty acids, in order to describe phylogenetic differences of milk composition between families and sub-families of Artiodactyla. Animals and Sample Collection Milk was obtained from 4 impala of the Koppies Dam Nature Reserve, and 6 impala of the farm Helpmekaar, district Ventersburg, Free State Province. The impalas sampled at Koppies Dam were approximately 10 days postpartum, and those of Helpmekaar at 3 and 7 weeks after the first lambs were born. All lambs were born within a 5-week period. The lactation stage of the tsessebe was approximately 3-5 months. Milk from 3 tsessebe was obtained from the farm Holhoek, district Standerton, Mphumalanga Province. The animals roamed on natural vegetation. The animals were sedated for management purposes, with M99. Milk was obtained by palpation of the teats with sustained pressure on the udder. Milk-letting agents were not administered. Teats were milked out to obtain representative samples, producing 0.5-15 mL. Milk from separate teats of impala was obtained, while pooled samples were collected from the tsessebe. Milk was held on ice while in the field, frozen within 2 h, and kept frozen until analyzed. The milk was thawed in a water bath at 39 • C and mixed by swirling in preparation for analysis. Determination Water Content Water content was determined by gravimetry. Approximately 200 µL milk was weighed, dried in a forced convection drying oven for 2-3 h at 100 • C, and re-weighed [35]. Protein Analysis The crude protein (CP) content of approximately 100 µg milk was determined by the Dumas method using the Dumas method [36]. A conversion factor of 6.38 was used to convert nitrogen (N) content to protein content. NPN (non-protein nitrogen) and whey proteins were obtained by selective precipitation with trichloroacetic acid or acidification with hydrochloric acid according to the method of Igarashi [37], and the nitrogen and protein content of each determined as above. Milk proteins were separated by electrophoresis on a Mighty Small miniature slab gel electrophoresis unit SE 260 (Hoefer Scientific Instruments. Holliston, MA, USA). Milk samples were diluted 1:10 with stacking gel buffer that contained 2-5% sucrose, with bromophenol blue as tracking dye. Sample volumes of 5 µL milk were loaded in the wells in the slab gel. Identification of protein bands was based on comparative electrophoretic mobility of the major proteins from cow and sheep on Urea-PAGE. Lipid Analysis Quantitative extraction of total fat was performed with chloroform and methanol in a ratio of 2:1 (v/v) [38]. Total extractable fat content was determined by weight and expressed as gram fat/100 g milk. 
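Before the fatty acid and carbohydrate methods, the protein arithmetic implied above can be illustrated: crude protein from Dumas nitrogen (N × 6.38), a simplified reading of the NPN and non-casein fractions obtained by the precipitation steps, and the resulting whey:casein ratio. The input values in the sketch below are invented, not measured data from this study.

```python
# Sketch of the protein arithmetic used in the methods above: crude protein from
# Dumas nitrogen (factor 6.38), true protein after subtracting NPN, and the
# whey:casein ratio. Input values are illustrative, not measured data.

N_TO_MILK_PROTEIN = 6.38

def crude_protein(nitrogen_pct: float) -> float:
    """Crude protein (g/100 g milk) from nitrogen (g/100 g milk)."""
    return nitrogen_pct * N_TO_MILK_PROTEIN

def protein_fractions(total_n_pct: float, npn_pct: float, noncasein_n_pct: float):
    """Return (crude protein, true protein, whey protein, casein), all in g/100 g milk.
    noncasein_n_pct is the acid-soluble nitrogen remaining after casein precipitation;
    subtracting NPN from it is a simplified reading of the fractionation described above."""
    cp = crude_protein(total_n_pct)
    true_protein = crude_protein(total_n_pct - npn_pct)
    whey_protein = crude_protein(noncasein_n_pct - npn_pct)
    casein = true_protein - whey_protein
    return cp, true_protein, whey_protein, casein

if __name__ == "__main__":
    cp, tp, whey, casein = protein_fractions(total_n_pct=1.03, npn_pct=0.05, noncasein_n_pct=0.26)
    print(f"crude protein {cp:.2f} g/100 g, casein {casein:.2f}, whey {whey:.2f}, "
          f"whey:casein = 1:{casein / whey:.1f}")
```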
Transesterification of fatty acids to form methyl esters (FAME) was performed with 0.5 N NaOH in methanol and 14% boron trifluoride in methanol [39]. The FAME were quantified with a Varian 430 GC, with a flame ionization detector and a fused silica capillary column, Chrompack CPSIL 88 (100 m length, 0.25 mm ID, 0.2 µm film thickness), and column temperature of 40-230 • C (hold 2 min; 4 • C/min; hold 10 min). The FAME in hexane (1 µL) was injected into the column by a Varian 4800 Autosampler, (Varian Inc. Walnut Creek, CA, USA) with a split ratio of 100:1. The injection port and detector were maintained at 250 • C. Hydrogen was used as carrier gas at 45 psi with nitrogen as makeup gas. Chromatograms were recorded by Varian Star Chromatography Software (Version 6.41). Hendecanoic acid (C11:0) was used as internal standard, after it was established that it was not detected in the samples under study. Identification of FAME was by comparison of the relative retention times of FAME peaks from samples with those of standards obtained from Supelco (Supelco 37 Component FAME Mix 47885-U with addition of C18:1c7, C18:2c9t11, C19:0, C22:5). Carbohydrate Analysis Carbohydrates were determined by high-performance liquid chromatography with a Waters Breeze system with Biorad Aminex 42C (300 × 7.8) mm (Pall Life Sciences, Ann Arbor, MI, USA) and Waters Sugar Pak 1 (300 × 7.8) mm (Microsep, Johannesburg, South Africa) columns at 84 • C with a differential refractive detector. The mobile phase was de-ionized water eluted at 0.6 mL/min. Samples were de-fatted and de-proteinized with Ultrafree-CL (UFC4 LCC 25) filter devices (Millipore, Merck, Johannesburg, South Africa) centrifuged at 3000× g. Samples of 10 µL were injected and quantified with maltotriose, lactose, glucose, and galactose as standards. Statistical Analysis For a phylogenetic comparison of the two species under study with other African Artiodactyla, we incorporated the data of springbok, blue wildebeest, black wildebeest, blesbok, red hartebeest, sable antelope, mountain reedbuck, indigenous African cattle, African buffalo, gemsbok, eland, kudu, and giraffe [6,7,[10][11][12]20]. Where necessary, previously published data were re-calculated to the same units used here, i.e., g/100 g milk for nutrients and percentage of total fatty acids. For sable antelope, springbok, gemsbok, and giraffe, respectively 2, 4, 10, and 30 additional milk samples were analyzed since publication, which were included in the current study to increase the representative numbers. In these cases, the animals roamed on locations that differed from that of the animals described in the publications. The diet might possibly have been affected by differences in vegetation types of the respective areas. The potential effect on milk composition is incorporated in the discussion. Significant differences between means among species were determined by analysis of variance (ANOVA) and multiple comparisons between species by the Tukey-Kramer test at α = 0.05 [41]. Principal component analysis (PCA) was used to visualize variables in a two-dimensional space by Varimax rotation. Hierarchial clustering was used to perform phylogenetic comparisons and to construct dendrograms [41]. Twenty-four fatty acids were included in the statistical analysis. Those not detected (ND) or that only occurred in individual animals at less than 0.1% (C11:0, C15:1c10, C18:3c6,9,12, C21:0, C23:0, C24:0, and all the unsaturated fatty acids of C20-C24 length) were excluded. 
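The multivariate comparison described above can be reproduced in outline with standard libraries. The sketch below runs a PCA and a Ward hierarchical clustering on a small invented composition table; the published analysis used the full nutrient and fatty acid data set and different software.

```python
# Sketch: PCA and hierarchical clustering of a per-species milk-composition table,
# analogous to the statistical comparison described above. The toy values are
# invented; the published analysis used the full nutrient and fatty acid data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, dendrogram

species = ["impala", "tsessebe", "blesbok", "blue wildebeest", "gemsbok", "eland"]
# columns: fat, protein, lactose, SFA (% of fatty acids), C8-C12 (% of fatty acids)
X = np.array([
    [5.6,  6.6, 4.4, 71.3,  5.2],
    [8.4,  5.2, 6.1, 80.2, 26.9],
    [9.0,  5.0, 5.5, 79.0, 25.0],
    [10.5, 4.5, 5.0, 78.0, 24.0],
    [6.0,  6.5, 4.2, 70.0,  6.0],
    [7.5,  5.8, 4.0, 64.0,  8.0],
])

Xs = StandardScaler().fit_transform(X)          # put variables on a common scale
scores = PCA(n_components=2).fit_transform(Xs)  # two-dimensional ordination
for name, (pc1, pc2) in zip(species, scores):
    print(f"{name:16s} PC1={pc1:+.2f} PC2={pc2:+.2f}")

Z = linkage(Xs, method="ward")                  # hierarchical clustering (Ward linkage)
dn = dendrogram(Z, labels=species, no_plot=True)
print("dendrogram leaf order:", dn["ivl"])
```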
Results The milk proximate composition of the impalas and tsessebes is shown in Table 1 and the milk fatty acid composition in Table 2. The proximate composition of bovine [1,42], ovine [43], caprine [44], and impala milk [45] is included for comparison. The nutrient composition of impala milk obtained here is very different compared to that reported for the latter [45]. It should be mentioned that the performance of the protein analysis and fat extraction methods applied in the current study were shown by others to produce erroneous results compared to reference methods, specifically in the analysis of milk with a high fat and oligosaccharide content, such as from marine mammals [46]. The milk fat and oligosaccharide content of impala and tsessebe are in the same order of magnitude as in cow's milk. It was therefore assumed that the analytical techniques were not affected to the same extent as for that of marine mammals. Our own coefficients of variance (cv) for the analysis methods of these parameters in cow's milk (n = 11) were 2.73% for fat, 2.15% for nitrogen analysis, 2.31% for lactose, and 2.15-3.17% for the individual fatty acids. In Figure 1, the electrophoretograms of all three tsessebe and two impala as representatives of the total of 20 animals, are shown. The protein bands of the antelope milk showed similar migration sequences to the proteins of the domestic species, cow, and sheep, however, at different distances of migration. Because the proteins of the two antelope species were run on separate gels with either cow or sheep milk, we carried out interpretation by comparison of calculated Rf values (migration distance) of the caseins relative to the bovine and ovine proteins. Electrophoretograms of earlier studies were consulted for this purpose [6,7,[10][11][12]20]. Only the caseins were compared, because the presence of the whey proteins may sometimes be too low to be visible as distinct electrophoretic bands. On the basis of the Rf values, we found that the migration distances in Figure 1 were very close to what would be observed if all were run on a single gel. Proteins The 5.45 ± 0.49% protein content of the tsessebe milk is comparable with that of other ruminants [12,32], while the 6.60 ± 0.51 of the impala is on the high side of the scale, which has been connected with antelope that hide their offspring for extended periods during the day [8]. The whey to casein ratios of impala milk proteins was 1:3.5. That that of the tsessebe was 1:8.2, and was comparable to that of other Alcelaphinae species, the wildebeest and specifically the blesbok [7], but not with that of the red hartebeest [12]. Comparison of the protein bands, specifically the caseins, of the impala and tsessebe with each other, as well as other ruminants, was performed by Rf values of the electrophoretic bands taken from earlier work. In Figure 1, it can be seen that the κ-and β-caseins of the tsessebe migrated further than those of the ovine milk, and according to the Rf values, very similar to that of cow's milk. The α-caseins migrated further than the ovine but shorter than the bovine equivalent. Comparison with earlier work showed that the caseins of the tsessebe migrated at almost equal distances to those of the red hartebeest [12], blesbok, and black and blue wildebeest [7]. This indicated that the sizes and surface charges of the milk proteins of the Alcelaphinae family were conserved. 
The κ- and β-caseins of the impala migrated shorter than the bovine and ovine equivalents, according to the Rf values. The α-caseins of impala migrated further than the ovine but shorter than the bovine equivalent. Comparison with earlier work showed that the caseins of impala showed almost equal migration distances to that of the sable antelope [11], gemsbok, and scimitar oryx [9] of the Hippotraginae family. Although the milk proteins of only a few Bovidae families, specifically the Antilopinae [12], Bovinae [6], Giraffidae, Reduncinae [12], and Caprinae (goat and sheep as references) had been investigated, distinct electrophoretic patterns were observed for each family. The closest relatives to the impala are the Hippotraginae and Alcelaphinae families [47].
It is interesting that the milk proteins of these two families, according to electrophoretic migration, are very different, and that the electrophoretic properties of impala milk proteins should resemble that of the Hippotraginae. Carbohydrates The lactose content of 4.36 ± 0.94% of impala is similar to that of the domesticated species as well as other ruminants, while the 6.10 ± 3.85 of tsessebe milk seems to be higher than that of most ruminants [12,32]. No monosaccharides were observed in impala milk, while approximately 0.08% glucose was observed in tsessebe milk. The presence of glucose was also noted in the milk of other Alcelaphinae species, the wildebeest and specifically the blesbok [7], but not in that of the red hartebeest [12]. Lipids The fat content of 8.44 ± 3.13 of the tsessebe milk is high and comparable with the 6-12% of other Alcelaphinae [7,12]. The fat content of 5.56 ± 1.96% of the impala milk is low when compared to other ruminants in general [31], and African antelope specifically [12]. The lipid fractions of the milk of tsessebe (Table 2) contain high amounts of saturated fatty acids (SFA), 80.24 ± 3.89%, mainly due to the high content of the fatty acids of C6-C12 length. It contains 26.9% C6-C12 length combined, comparable to that of its Alcelaphinae cousins, the black and blue wildebeest and blesbok [7]. The milk fat of the impala contains 71.31 ± 2.584% SFA and only 5.16% of the saturated fatty acids of C6-C12 length combined, which brings it in line with that of the springbok (Antelopinae) [10] and the Bovinae, cow [20], African buffalo [6], kudu, and eland [9]. The milk fatty acid content of mammals is dependent on the dietary content thereof. In ruminants, other nutritients, specifically roughage, may also play a role [48][49][50]. The impala in this study lived on Dry Cymbopogon-Themeda veld and the tsessebe on Soweto Highland Grassland (a sub-type of Cymbopogon-Themeda veld) [51,52]. The impala is both browser and grazer, depending on availability, and Cynodon dactylon and Themeda triandra form a large part of its diet [34]. Nutritional information on the tsessebe is only available for its natural range and not for the grassland region under study [34]. However, since Soweto Highland Grassland also contains a high proportion of Themeda triandra and Cynodon dactylon [51,52], and these are of the major grass species consumed by other Alcelenaphinae species [34], it may be assumed that the tsessebe also consume them. Chilliard et al. [48] reported that the type of grass had no nominal effect on cow's milk fatty acid composition, probably because the fatty acid composition does not vary much between grass types [48,50]. Freshness and maturity of grass was shown to have a small effect, specifically on content of conjugated linoleic acid, while drastic changes in milk fatty acid composition was shown to depend on dietary supplementation with seeds, such as linseed or maize, because of the higher fat content [53,54]. Foliage as fodder supplement serves to increase the protein content [48,49]. It may therefore be accepted that the diet of the free ranging impala and tsessebe would not affect milk fatty acid composition to a great extent because they grazed on the same grassland as the species they are compared with. Energy Milk energy density is an indication of the ecology and life-history strategy of a species. Extended periods between nursing may cause milk to be highly concentrated. 
The mechanism is due to a downregulation of the lactose synthesis from infrequent infant suckling, which results in milk with a high fat and low sugar and protein content, which conserves water, glucose, and protein for the mother [2]. Petzinger et al. [8] showed that a "hider" strategy is observed in bovids for which the milk protein supplies a high proportion (33%) of milk gross energy, such as the tribe Tragelaphini [9]. The impala hide their offspring for the first few days and then stay in the herd, not always close to the mother. During this time, they suckle only a few times per day [34]. The tsessebe offspring follow the mother shortly after birth [34]. The percentage gross energy provided by the milk proteins is approximately 39% of the total energy for the impala and 25.0% for the tsessebe, which is in agreement with the "hider" strategy proposed by Petzinger et al. [8]. The statistical analyses were carried out in a progressive way per species, by ANOVA, as was done in earlier work [9,20], as well as with PCAs and a dendrogram [12]. The PCA of fat, protein, NPN, lactose, and oligosaccharides; PCA of SFA, MUFA, and PUFA; PCA of content of fatty acids; and PCA of all the components combined were similar to that observed previously with regards to the clustering of the data of the species according to their taxa. A clear separation of the Bovinae from the other families-Alcelaphinae, Antilopinae, Bovinae, Giraffidae, Hippotraginae, and Reduncinae-was noted [12]. Therefore, in the current study, the data of impala and tsessebe milk were compared with that of the other species in a dendrogram only (Figure 2). In the dendrogram, the ruminants were divided into two main clusters at a Euclidian distance of 18. Broadly, the cluster on the right was characterized by milk with a high fat content of 6-12%, lactose content of 3-6%, and protein content of 4-6%. The fat consists of a high content of saturated fatty acids (>75% of total fatty acids) and combined medium-chain fatty acid (8-14 carbons) above 25%, and less than 25% 18 carbon-length fatty acids. Within this cluster, the blue and black wildebeest form a smaller cluster at a Euclidian distance of 6.
Their milk contains a lower protein content and higher fat content compared to the tsessebe, blesbok, and red hartebeest, but a similar fatty acid composition. It should be noted that the new milk data placed the tsessebe in the Alcelaphinae cluster together with the four cousin species that were shown to cluster in previous work [12]. The tesessebe milk was very similar to that of the other Alcelaphinae in general, and to that of the blesbok specifically. The milk composition of the whole subfamily differs from most other ruminants. The milk composition of the species in the cluster on the left could be defined by 4-15% fat, 3-6% lactose, and 3-9% protein content. The contents overlap with that of the Alcelaphinae, however, the exchange amongst the three macronutrients, i.e., low lactose or protein content for high fat, is within larger margins than observed for the Alcelaphinae. This cluster is also distinguished by less than 75% saturated fatty acids, and the medium-chain fatty acids are exchanged for long-chain acids, specifically those of 16-18 carbon lengths and their unsaturated forms. Other extremes are also observed in this cluster, such as the springbok milk having the highest fat content of approximately 14.5% [10] and the mountain reedbuck with the lowest lactose content of 3.4% [12]. Although the milk of Bovinae representatives differ with regard to protein and fat contents, at least three of them-indigenous African cattle, eland, and African buffalo-are grouped together due to a similar lactose content of approximately 4%, low saturated fatty acid composition of around 64%, as well as a general similarity of fatty acid composition [6,9,20]. A finer grouping within this cluster is not very accurate, because it lies below the truncation line at a Euclidian distance of 4.18. This may imply that individual uniqueness of composition may play a larger role in describing the milk nutrient composition of each species. Conclusions The milk composition of impala and tsessebe was described. By statistical comparison, these analyses showed that the milk composition of the Alcelaphinae differs from species of seven other taxonomic groups of African Artiodactyla. Although the impala is a close relative of the Alcelaphinae, its milk composition is not comparable with their members. The phylogenetic differences could, in part, be described by biochemical and genetic properties, specifically regarding the synthesis of fatty acids. Phylogenetic properties are very complex and are therefore not restricted to the macro-nutrients. Milk data from more species, as well as representatives of other taxonomic groups, together with additional nutrient parameters, will be needed to refine the phylogenetic relationship of ruminant milk.
6,010.2
2021-02-01T00:00:00.000
[ "Agricultural and Food Sciences", "Biology" ]
Incremental Discriminant Analysis on Interval-Valued Parameters for Emitter Identification Emitter identification has been widely recognized as one crucial issue for communication, electronic reconnaissance, and radar intelligence analysis. However, the measurements of emitter signal parameters typically take the form of uncertain intervals rather than precise values. In addition, the measurements are generally accumulated dynamically and continuously. As a result, one imminent task has become how to carry out discriminant analysis of interval-valued parameters incrementally for emitter identification. Existing machine learning approaches for interval-valued data analysis are unfit for this purpose as they generally assume a uniform distribution and are usually restricted to static data analysis. To address the above problems, we bring forward an incremental discriminant analysis method on interval-valued parameters (IDAIP) for emitter identification. Extensive experiments on both synthetic and real-life data sets have validated the efficiency and effectiveness of our method. Introduction It is widely recognized that emitter identification is indispensable for communication, electronic reconnaissance, and radar intelligence analysis.No doubt, class discriminant analysis of emitter signal parameters has played a big role in emitter identification.For instance, emitter types could be inferred upon the discriminating signal parameters.The emitter working modes, detection range, angle resolution, and Doppler measurement of the target could be estimated as well according to the collected measurements of discriminative signal parameters.As can be seen, discriminant analysis of emitter signal parameters is crucial for both civil and military applications. However, the measurements of emitter signal parameters are typically characterised by uncertainty and continuous growth.These two problems pose great challenges for class discriminant analysis of emitter signal parameters. Firstly, the parameter measurement typically takes the form of uncertain intervals.Such uncertainty can result from the unstable working status of transmitter circuit, environmental noises, or other unknown interference sources.Secondly, the interval-valued signal parameter measurements are being accumulated dynamically and continuously.In practical applications, the amount of received measurements from various kinds of emitters could be explosive.According to the conservative estimation, when the channel width is 1 GHz, the sampling rate is 2.5 GHz and each sample allocated two bytes of storage, the amount of emitter signal parameter measurements received per hour could be up to 18 T, and the volume per day could approach 432 T. Unfortunately, few machine learning methods are fit for incremental interval-valued emitter signal parameter discriminant analysis as they either deal with precise-valued data only or process interval-valued data under an assumption of uniform distribution.Actually, the uniform assumption does not hold for emitter signal parameters.The distribution of emitter signal parameters is usually assumed to be approximately normal instead.Therefore, the imminent challenge is how to perform the discriminant analysis using the interval-valued signal parameters complying with the normal distribution incrementally. 
An example of interval-valued emitter data set is illustrated in Figure 1.In data set , each observation has one interval-valued parameter measurement and each interval-valued measurement complies with a certain normal Inspired by the above problems, we bring forward an incremental discriminant analysis method on interval-valued parameters (IDAIP) for emitter identification.The emitter signal parameters include radio frequency (RF), pulse repetitive interval (PRI), pulse amplitude (PA), pulse width (PW), and so on.Our IDAIP method is not only robust to the uncertainty of interval-valued parameter measurements but also able to carry out the emitter parameter analysis incrementally for emitter identification.To the best of our knowledge, little effort has been made in incremental interval-valued emitter parameter analysis yet.Experimental results validate the efficiency and effectiveness of our IDAIP method for potential applications in communication, electronic reconnaissance, and radar intelligence analysis. The rest of the paper is organized as follows.We briefly review related work in interval-valued data analysis in Section 2. Our IDAIP method is formally proposed in Section 3. In Section 4, we present the experimental results.And we conclude in Section 5. Related Work Quite a large number of machine learning methods have been put forward to address the uncertainty of interval-valued data.For example, symbolic data analysis [1,2] has been proposed to extend the classical data models to take into account the interval-valued information.The representatives interval-valued data analysis approaches include point value replacement [3][4][5], p-Box [6][7][8][9][10][11], and Hausdorff distance methods [12,13]. The point value replacement approach replaces the interval values by precise values, such as taking the middle points or ranges of intervals [3][4][5].In this way, they transfer the interval-valued data into the classical point-valued data.These approaches fit a linear regression model to the midpoints, lower, or upper bounds of the interval and then apply the model to independent symbolic intervals.The model optimization principles are generally the minimization of mid-point, lower bound, or upper bound or the combination errors. Alternatively, the p-Box approaches describe the uncertainty over an interval-valued variable by a pair of lower and upper cumulative probability distributions [6].It is recognized that p-Boxes are one of the simplest and most popular models which directly extend the cumulative distributions in the precise case or simply derived from small samples [7] and expert opinions.Due to the simplicity, the p-Box methods have been widely used in many applications, such as estimation of future climate change [8], engineering design [9], soil screening level estimation [10], and reliability analysis [11].However, p-Box methods require that some characteristic values are known in advance, such as the mode, mean, and other fractiles of the distributions, while these values are unknown in case of the normal-distributed interval-valued emitter parameter measurements.As a result, the p-Box approaches are unfit for class discriminant analysis. 
Hausdorff distance approach is assumed to be a natural way to compare the dissimilarity between interval-valued data [12,13].Other distance measures for interval-valued data have been applied as well, such as Euclidean distance [14], taxi distance [15], Mahalanobis distance, and the Wasserstein distance [16].However, these distance metrics are all unsuitable for uncovering the delicate nature of normal class distributions. Existing interval-valued data analysis approaches typically assume that observations are independent with each other and the variable values are uniformly distributed in the interval.However, this is not true for emitter signal parameters which comply approximately with a normal distribution.Existing approaches are generally restricted to the equidistribution hypothesis.In addition, existing methods are constrained for static data and have no incremental learning ability.Though a fuzzy set based incremental learning algorithms on interval variables [17] has been explored it is assumed that delicate prior knowledge about fuzzy set definition is available, which is actually not the case in the practical application of emitter identification. Method In this section, we formally present our IDAIP method. Suppose the interval-valued emitter data set is composed of a set of continuously accumulating observations.The current number of observations, signal parameters, and emitter types are denoted as , , and , respectively.Assume there are () number of observations in each emitter type , 1 ≤ ≤ .And we assume that each observation , 1 ≤ ≤ , is consisted of number of interval-valued parameter measurements, { } 1≤≤ , an associated emitter type , and a time stamp indicating the collection time.We denote the set of observations within the same emitter type as Ω . Each interval-valued measurement = [ , ] is consisted of a lower bound and an upper bound .The lower bound and upper bound correspond to the minimum and maximum measurement, respectively, among independent measurements of parameter from observation .Each of the independent measurements of signal parameter from observation , ℎ (1 ≤ ℎ ≤ ), is assumed to comply with the same interval normal distribution N( , 2 ).We also assume that mean values from observations in the same class , , where ∈ Ω and 1 ≤ ≤ , comply with the same class normal distribution ( , 2 ) and that ≈ .Consider Given the current interval-valued emitter data set , our IDAIP method is consisted of four major steps: (3) Class Discriminant Analysis: evaluate the class discriminating power of each signal parameter between each emitter type pair - V . (4) Incremental Learning: implement the incremental class discriminant parameter analysis. Interval Distribution Estimation. As discussed above, traditional symbolic interval data analysis typically assumes that the measurements in an interval are uniformly distributed.However, the true measurement distributions of emitter signal parameters are assumed to comply with a normal distribution instead.The difference between the traditional uniform assumption and our normal distribution assumption is illustrated in Figure 2. Given an interval-valued measurement = [ , ] of signal parameter from observation , we assume the corresponding parameter measurements { ℎ } 1≤ℎ≤ follow a certain normal distribution, ℎ ∼ N( , 2 ), where and correspond to the minimum and maximum value within { ℎ } 1≤ℎ≤ .Then we can estimate the interval distribution N( , 2 ) by Lemma 1. Lemma 1. 
Assume an interval-valued measurement ), 1 ≤ ℎ ≤ , and that and correspond to the minimum and maximum value within { ℎ } 1≤ℎ≤ ; then we can infer that ) . ( Proof. (1) μ could be inferred according to the minimum square error (MSE) criteria. (2) Under the above normal distribution assumption, the lower bound measurement in interval corresponds to the smallest order statistic while the upper bound measurement corresponds to the largest order statistic.The order statistics of standard normal random variables have been approximated [18].One approximation for the th highest order statistic out of is given as where ) and it is recommended that = 0.375.Therefore, given the interval-valued measurement = [ , ] of signal parameter from observation , we have where and are the mean value and standard deviation of the normal distribution for signal parameter Therefore, the conclusion holds. Class Distribution Inference.We assume that, in the ideal case, the standard deviations of interval distributions and the associated class distribution are the same; ≈ under the condition that ∈ Ω .We also assume that the interval means comply with the underlying normal class distribution, ∼ N( , 2 ), could be inferred as below: ( Proof.The conclusion could be inferred according to the minimum square error (MSE) criteria. Class Discriminant We denote the intersection set * as the set of intersected points between the class distribution curves N( , 2 ) and N( V , 2 V ) for emitter type and V , respectively, on signal parameter , | * | = 0, 1, 2 or ∞.Then, the mutual classification error V , the probability that observations of emitter type are misclassified into emitter type V according to signal parameter , can be modified from [19] and classified into the following three cases: (1) | * | = 0: emitter types and V could be discriminated perfectly where is the misclassification rate lower bound between classes for any signal parameter, indicating emitter classes and V could be discriminated perfectly. 
(2) | * | = ∞: emitter types and V are overlapping completely where is the misclassification rate upper bound between classes for any signal parameter, indicating emitter classes and V are overlapping with each other.(3) | * | = 1 or 2: emitter types and V are overlapping partially In addition, the maximum mutual classification error V between emitter type and V on signal parameter could be defined as Following the definition given by Shannon and Hartley [20,21], the representation of the maximum mutual classification error information V can be expressed in terms of a logarithmic scale of base 10 as Likewise, the minimum information required for discriminating between two emitter classes and V for signal parameter is given by If V ≥ 0 , the two emitter classes and V can be discriminated by signal parameter with the classification error smaller than a predefined threshold 0 .In that case, signal parameter is assumed to be discriminating for type pair - V .The results of class discriminant analysis are stored in an upper triangular discriminating power matrix dp.The element dp[, V] ( < V) of matrix dp would be specified as the index of the most discriminating signal parameter for the pair of emitter types - V , as defined below: 3.4.Incremental Learning.For the interval-valued signal parameters, such as radio frequency (RF), pulse repetitive interval (PRI), pulse width (PW), and pulse amplitude (PA), we define the data description model as a mean value matrix Σ , a variation matrix Σ 2 , and a class distribution vector accordingly as follows. Definition 3 (mean value matrix Σ ).One defines each element of the two-dimensional mean value matrix Σ (, ) as the sum of estimated mean values of signal parameter from all the observation in type , where 1 ≤ ≤ and 1 ≤ ≤ .Mathematically speaking, Definition 4 (variation matrix Σ 2 ).One defines each element of variation matrix Σ 2 (, ) as the sum of estimated variations of signal parameter on all the observation in emitter type , where 1 ≤ ≤ and 1 ≤ ≤ .Mathematically speaking, Based on the above two definitions and Lemma 2, the mean value and standard deviation of the normal distribution for each emitter type and each signal parameter could be formalized as Similarly, the intersection set * , the mutual classification error V , the maximum mutual classification error V , the maximum mutual classification error information V , and the discriminating power matrix dp could be updated in line. In the incremental interval-valued parameter analysis process, once some new observations at time are collected, class distribution vector , mean value matrix Σ , and variation matrix Σ 2 would be updated sequentially.And upon the updated matrices and vector, the discriminating power matrix dp would be updated afterwards as well.The outline of our IDAIP method is illustrated in Algorithm 1. Results We evaluated our IDAIP method on a series of synthetic data sets and one real-life data set. 
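As background for these evaluations, the estimation and update steps described in the Method section can be summarized in a short sketch. The variable names and the numerical treatment of the mutual classification error below are one plausible reading of the prose rather than the exact published equations; the Blom-type order-statistic constant a = 0.375 follows the recommendation cited above.

```python
# Hedged reconstruction of the estimators described above: interval mean = midpoint,
# interval standard deviation from a Blom-type order-statistic approximation
# (a = 0.375), class distributions pooled from the per-interval estimates, and
# incremental updates kept as running sums. Names and exact formulas are
# reconstructions of the prose, not the paper's notation.
import numpy as np
from scipy.stats import norm

A = 0.375  # recommended constant in the order-statistic approximation

def estimate_interval(lower: float, upper: float, m: int):
    """Estimate (mu, sigma) of the normal distribution underlying one interval,
    where lower/upper are the min/max of m independent measurements."""
    mu = 0.5 * (lower + upper)
    z_max = norm.ppf((m - A) / (m - 2 * A + 1))   # expected largest of m standard normals
    sigma = (upper - lower) / (2.0 * z_max)
    return mu, sigma

class RunningClassStats:
    """Incrementally pooled class distribution for one emitter type and one parameter."""
    def __init__(self):
        self.n = 0
        self.sum_mu = 0.0     # entry of the mean value matrix
        self.sum_var = 0.0    # entry of the variation matrix

    def update(self, lower: float, upper: float, m: int):
        mu, sigma = estimate_interval(lower, upper, m)
        self.n += 1
        self.sum_mu += mu
        self.sum_var += sigma ** 2

    @property
    def class_distribution(self):
        return self.sum_mu / self.n, float(np.sqrt(self.sum_var / self.n))

def mutual_error(mu_c, sd_c, mu_v, sd_v, n_grid=20000):
    """Probability that a measurement from class c falls where class v's density is
    higher (one reading of the 'mutual classification error'); numeric approximation."""
    lo = min(mu_c - 6 * sd_c, mu_v - 6 * sd_v)
    hi = max(mu_c + 6 * sd_c, mu_v + 6 * sd_v)
    x = np.linspace(lo, hi, n_grid)
    mask = norm.pdf(x, mu_v, sd_v) > norm.pdf(x, mu_c, sd_c)
    return float(np.sum(norm.pdf(x, mu_c, sd_c) * mask) * (x[1] - x[0]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stats_c = RunningClassStats()
    for _ in range(50):                     # 50 intervals, 5 measurements each
        samples = rng.normal(10.0, 0.4, size=5)
        stats_c.update(samples.min(), samples.max(), m=5)
    mu_c, sd_c = stats_c.class_distribution
    print(f"pooled class distribution: N({mu_c:.2f}, {sd_c:.2f}^2)")
    print(f"mutual error vs N(11, 0.5^2): {mutual_error(mu_c, sd_c, 11.0, 0.5):.3f}")
```

The running-sum representation is what allows the class distributions and the discriminating power matrix to be refreshed from newly arrived observations without revisiting the full data set.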
In the synthetic data, the mean values of measurements for the same signal parameter from observations in the same emitter type all comply with the same normal distribution while those from observations in different emitter types may not.The standard deviations of measurements for the same parameter from observations in the same emitter type are always kept as the same.Each interval-valued parameter measurement was obtained by randomly generating five ( = 5) samples from the corresponding parameter normal distribution and selecting the minimum and maximum measurements as the lower and upper bound, respectively, to form an interval. This real-life emitter data set consisted of 120 observations from three different emitter classes, denoted as 1 , 2 , and 3 , respectively.Each emitter class has around 40 observations.Each observation was composed of eight interval-valued measurements.In addition, an independent test emitter data of 20 observations was provided to validate the emitter identification model constructed from the emitter training data. All the experiments were conducted on a Dell PC running Microsoft Windows XP with a Pentium dual-core CPU of 2.6 GHz and a 2 G RAM. Evaluation on Synthetic Data. We evaluated both the efficiency and effectiveness of our IDAIP method on the synthetic data sets.The effectiveness of our method was evaluated in terms of class distribution inference and class discriminant analysis. Evaluation of Efficiency. During the experiments, we compared the runtime of our IDAIP method versus the batch one without incremental learning. IDAIP method Input Parameters: (i) Σ : mean value matrix at time (ii) Σ 2 : variation matrix at time (iii) : class distribution vector at time (iv) dp: discriminating power matrix at time (v) ΔΩ : new observations from emitter type at time Output: (i) Σ : the updated mean value matrix at time (ii) Σ 2 : the updated variation matrix at time (iii) : the updated class distribution vector at time (iv) dp : the updated discriminating power matrix at time Begin (1) for each new observation ∈ ΔΩ at time 9) for each pair of emitter types - V do (10) for each signal parameter do (11) conduct the class discriminant analysis (12) if is more discriminating than that in dp(, V) then (13) update dp(, V) = ( , V V ) ( 14) end ( 15) end ( 16) end (17) output the updated Σ , Σ 2 , and dp. Firstly, we compared the efficiency of our method against the batch one when varying the number of observations of each emitter type, NumPerCls, between 10 to 100.The number of emitter types was set as 5, = 5, and the number of signal parameters was fixed at ten, = 10.During experiment, our method incrementally updates the mean value matrix, the variation matrix, and the class distribution vector once 10 new observations arrive and performs the class discriminant analysis based on the updated matrices while the batch method conducts the class discriminant analysis using the complete set of available observations.As can be seen from Figure 3(a), the runtime of incremental learning is approximately the same while that of the batch method increases linearly. Secondly, we varied the number of available emitter types, , from four to twelve while we fixed the number of observations per emitter type as 100 and the number of signal parameters as ten.Again, our method incrementally updated the three matrices once 100 observations from a new emitter type were collected.Our method is orders of magnitude faster than the batch method, as illustrated in Figure 3(b). 
Finally, we varied the number of signal parameters, , from nine to 18, while fixing the number of observations per emitter type as 100 and the number of emitter types as five.Our IDAIP method incrementally performed the class discriminant analysis when two new signal parameters were available.Again, our method is significantly more efficient than the batch method, as indicated in Figure 3(c). Evaluation of Class Distribution Inference. Given an estimated value of mean or standard deviation of a class normal distribution for a certain signal parameter from a certain emitter type, we define the absolute error as the absolute difference between the estimated value and the underlying true value.During experiments, we varied the number of observations per emitter type, NumPerCls, and calculated the corresponding absolute errors for each parameter from each emitter type.Under each parameter setting, we simulated the experiments 1000 times and generated a boxplot for the absolute errors.It turned out that the absolute errors of signal parameters from each emitter type all tend to converge to zero.For instance, the absolute errors of parameter 1 on emitter type 1 converge to zero with the increase of the number of observations per emitter type from 10 to 10, as shown in Figure 4(a).A similar trend could be observed in the boxplot of absolute errors of standard deviations of parameter 1 on emitter type 1 , as shown in Figure 4(b).Similar results could be obtained on other parameters.As can be seen, these results were rather consistent with Lemma 2. Evaluation of Class Discriminant Analysis. We simulated a series of interval-valued synthetic data sets with different number of signal parameters when varying number of observations per emitter type.The number of emitter types was fixed at ten during experiments.Under each signal parameter number setting, we simulated the synthetic data set one thousand times.We reported the average percentage of correctly identified discriminating signal parameters in the discriminating power matrix dp.Again, with the increase in the number of observations per emitter type, there is a rise in the average percentage of correctly identified discriminating signal parameters in the discriminating power matrix dp, as shown in Figure 5. Evaluation on Real-life Data.We evaluated our IDAIP method against the benchmark peers on a real-life emitter data set of signal parameter measurements as well.We evaluated our method against the benchmark class interval discriminant analysis method [22] and the point value replacement methods [3][4][5] in term of class discriminant analysis, emitter identification scalability, and accuracy.do, the point value replacement method simply picked the lower bounds and upper bounds which were treated equally for normal class distribution inference.For fair comparison, the associated maximum mutual classification error between each class pair − for parameter PRI, , was calculated as well for the point value replacement methods and our method.We applied a common discrimination threshold 0 = 0.15 for the three methods.The class pair − would be considered discriminable if the corresponding is below threshold 0 and undiscriminable otherwise. 
As can be seen from Figure 6(a), the three different emitter classes could not be discriminated under the uniform assumption, because their measurements overlap heavily with each other. For this reason, the class interval method was unable to discriminate any of the three class pairs. The point value replacement method, on the other hand, was able to discriminate one class pair, classes 2-3. Comparatively, our IDAIP method successfully discriminated two class pairs, classes 1-2 and classes 2-3, fairly well, as shown in Figure 6(b). The maximum mutual classification errors in the class discriminant analysis of the three methods are illustrated in Figure 6(c). As can be observed, our method outperformed the other two methods by always achieving the smallest maximum mutual classification error.

Evaluation of Emitter Identification. To evaluate the emitter identification scalability and accuracy of our IDAIP method, we first designed a naive emitter identification approach on top of the incremental discriminant analysis. Given a test instance tt consisting of individual parameter measurements, its emitter type tt.type is inferred with an argmin rule: tt.type is the emitter type that minimises the weighted deviation of the test measurements from the inferred class distributions, where the weight of each signal parameter is derived from countdp, the occurrence count of that parameter in the discriminating power matrix dp.

(i) Evaluation of Scalability. We compared our incremental emitter identification method, based on the incremental discriminant analysis, against the benchmark point value replacement methods. Specifically, we transformed the original interval-valued real-life data into the middle value and range format. The original size of the emitter data set was set to 30, with 10 observations per emitter type. Then, the size of the emitter data was incrementally expanded to 60, 90, and 120. We compared the performance of our interval method against that of the benchmark point value methods: logistic regression, Multilayer Perceptron, Naive Bayes, SVM, NN, AdaBoostM1, and decision tree. As shown in Figure 7, our incremental emitter identification method was much more scalable than the benchmark ones. The majority of the benchmark point value methods were either unable to finish running within five minutes or exited due to memory problems when the data size exceeded 60. Specifically, the runtime of NN and of the Multilayer Perceptron exceeded five minutes when the data size reached 30. When the data size reached 90, the decision tree method ran into a memory allocation problem and terminated before it finished; so did the AdaBoostM1 method when the data size reached 120. Although the logistic regression, SVM, and Naive Bayes methods scale linearly with the increase in data size, they are not adaptable for incremental learning.

(ii) Evaluation of Accuracy. For the above reason, we only report the emitter identification accuracy when the training size was at most 60. Figure 8(a) compares the performance of our emitter identification approach against the benchmark point value methods when the training data size was 60, while Figure 8(b) compares the performance of our method and the benchmark point value methods when the training data size varied between 20 and 60. As can be observed, our emitter identification approach outperformed the benchmark point value methods significantly. This is because we make better use of the underlying normal parameter measurement distribution.
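The naive identification rule outlined above can be written out as a short function. The distance measure used here (a weighted, standardised deviation from each class mean) is an assumption made for illustration, since the exact expression is not reproduced above; class_means, class_stds and countdp are placeholder names for the inferred class parameters and the per-parameter occurrence counts in dp.

import numpy as np

def identify_emitter(test_measurements, class_means, class_stds, countdp):
    # class_means, class_stds: arrays of shape (n_types, n_params)
    # countdp: occurrence count of each parameter in the discriminating power matrix
    x = np.asarray(test_measurements, dtype=float)
    w = np.asarray(countdp, dtype=float)
    w = w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))
    z = np.abs(x - class_means) / class_stds        # standardised deviation per class
    scores = (w * z).sum(axis=1)                    # weighted by discriminating power
    return int(np.argmin(scores))                   # emitter type with smallest score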
Conclusion. In this work, we have put forward an incremental discriminant analysis method on interval-valued parameters for emitter identification (IDAIP) to address the problems of uncertainty in emitter signal parameter measurements and the rapid growth in emitter data volumes. Our method is not only robust to the uncertainty of interval-valued parameter measurements but also able to carry out the emitter parameter analysis incrementally for emitter identification. Extensive experiments have demonstrated the efficiency and effectiveness of our method. The runtime of our method is approximately linear with respect to the number of newly arrived observations and the number of signal parameters. Our method has outperformed benchmark interval-valued machine learning methods that assume uniform distributions within the intervals. These merits make our IDAIP method a promising candidate for emitter identification in both military and civil applications.

Figure 6: Class discriminant analysis on the real-life data set.
Figure 8: Comparison of emitter identification accuracy on the real-life data set.

The interval-valued measurements in each emitter type are assumed to be independent from each other. Then, from the estimated mean values and variances of the signal parameters in an emitter type, we can further infer the normal class distribution for each signal parameter in that emitter type, N(μ̂, σ̂²), as shown in (5) and as proved by Lemma 2: if the interval-valued measurements of a signal parameter from the observations of an emitter type satisfy that (1) the bounds of each interval follow a normal distribution with a common variance, and (2) the per-observation means themselves follow a normal distribution, then the class normal distribution N(μ̂, σ̂²) for that signal parameter in that emitter type can be inferred accordingly.

Class discriminant analysis. The discriminating power of a signal parameter for each emitter type pair u-v is evaluated by the probability that a parameter measurement from one emitter type is misclassified into the other emitter type according to the inferred class distributions N(μ_u, σ_u²) and N(μ_v, σ_v²). We define Φ(x, μ, σ) = ∫_{-∞}^{x} f(t, μ, σ) dt with f(x, μ, σ) = (1/(√(2π) σ)) exp(−(x − μ)²/(2σ²)). Based on these, a piecewise misclassification function of (μ_u, σ_u, μ_v, σ_v, x) is defined, whose branches depend on whether f(x, μ_u, σ_u) < f(x, μ_v, σ_v).

Notation
- the number of emitter types
- Ω: the set of observations in an emitter type
- the number of signal parameters
- the total number of observations
- the number of independent measurements for each parameter and each observation
- an interval-valued measurement, written as [lower bound, upper bound]
- μ (observation level): the estimated mean value of measurements for a parameter from an observation
- σ (observation level): the estimated standard deviation of measurements for a parameter from an observation
- μ (class level): the estimated mean value of measurements for a parameter from an emitter type
- σ (class level): the estimated standard deviation of measurements for a parameter from an emitter type
- Σ: the mean value matrix
- Σ²: the variation matrix
- the class distribution vector
6,153
2015-10-07T00:00:00.000
[ "Computer Science" ]
Probing a Scalar Singlet-Catalyzed Electroweak Phase Transition with Resonant Di-Higgs Production in the $4b$ Channel We investigate the prospective reach of the 14 TeV HL-LHC for resonant production of a heavy Higgs boson that decays to two SM-like Higgs bosons in the $4b$ final state in the scalar singlet extended Standard Model. We focus on the reach for choices of parameters yielding a strong first order electroweak phase transition. The event selection follows the $4b$ analysis by the ATLAS Collaboration, enhanced with the use of a boosted decision tree method to optimize the discrimination between signal and background events. The output of the multivariate discriminant is used directly in the statistical analysis. The prospective reach of the $4b$ channel is compatible with previous projections for the $bb\gamma\gamma$ and $4\tau$ channels for heavy Higgs boson mass $m_2$ below 500 GeV and superior to these channels for $m_2>500$ GeV. With 3 ab$^{-1}$ of integrated luminosity, it is possible to discover the heavy Higgs boson in the $4b$ channel for $m_2<500$ GeV in regions of parameter space yielding a strong first order electroweak phase transition and satisfying all other phenomenological constraints.

I. INTRODUCTION

After the discovery of the Higgs boson at the Large Hadron Collider (LHC) [1,2], understanding the details of electroweak symmetry-breaking (EWSB) in the context of the thermal history of the universe remains an important challenge for particle physics. In particular, it is possible that EWSB was accompanied by generation of the cosmic baryon asymmetry if new physics beyond the Standard Model (BSM) was active during that era. The Planck measurement of this asymmetry, characterized by the baryon-to-entropy density ratio Y_B = n_b/s, is given in Ref. [3]. Explaining the origin and magnitude of Y_B is a key problem for BSM scenarios. Electroweak baryogenesis (EWBG) is one of the appealing possibilities, in part due to its linking of Y_B to EWSB and in part due to its testability in current and near-future experiments. Three "Sakharov conditions" [4] need to be satisfied for successful EWBG: baryon number (B) violation; C and CP violation; and departure from thermal equilibrium (through a strong first order electroweak phase transition) or a breakdown of CPT symmetry. In the Standard Model (SM), the first condition, baryon number violation, can be induced by electroweak sphaleron processes. However, the CP violation in the SM is too feeble, and the EWSB transition is a crossover given the observed SM Higgs mass m_h ~ 125 GeV [5-9]. Therefore, the minimal SM cannot generate a successful strong first order electroweak phase transition (SFOEWPT). On the other hand, if new scalars exist in addition to the SM Higgs doublet, their interactions with the SM Higgs doublet may catalyze a SFOEWPT, thereby providing the necessary conditions for successful EWBG (new CP-violating interactions would also be required, a topic we do not treat further here). In this paper, we focus on the singlet extension of the SM, the xSM, which has been shown to accommodate a SFOEWPT [10,11]. In the xSM, after EWSB, the gauge eigenstates of the singlet scalar and the SM Higgs doublet mix with each other to form the mass eigenstates h_1 (SM-like) and h_2 (singlet-like). Further, we restrict our study to searching for a signal of the on-shell production of the heavy singlet-like Higgs h_2 decaying into two SM-like Higgs h_1 (i.e.
m_2 > 2 m_1), because the regions of parameter space that can generate a SFOEWPT simultaneously tend to enhance the h_2 h_1 h_1 trilinear coupling [10-12]. Currently, the ATLAS and CMS experiments are searching for a resonant di-Higgs signal through different Higgs decay final states: 4b [13,14], bbWW* or bbZZ* [15,16], bbττ [17,18], bbγγ [19,20], WW*WW* [21], and γγWW* [22]. Thus far, no significant excess over SM backgrounds has been observed. On the theoretical side, several studies have been performed in the parameter regions that are viable for a SFOEWPT. A singlet-like h_2 with a relatively light mass (~270 GeV) can be discovered in the bbττ final state at the 14 TeV LHC with a luminosity of 100 fb^-1 [12]. In the bbγγ and 4τ final states, a discovery is possible for m_2 up to 500 GeV at the 14 TeV high-luminosity LHC (HL-LHC) with a luminosity of 3 ab^-1 [23]. In the bbWW* final state, a resonant signal can be discovered for m_2 in the range between 350 GeV and 600 GeV at the 13 TeV LHC with a luminosity of 3 ab^-1 [24]. In this paper, we study the prospective discovery/exclusion reach in the 4b final state at the 14 TeV HL-LHC with a luminosity of 3 ab^-1. To that end, we first identify 22 benchmark points with m_2 ∈ [300, 850] GeV that produce the maximal and minimal di-Higgs signal rate σ_h2 × BR(h_2 → h_1 h_1) in consecutive 50 GeV intervals. The selected benchmark points satisfy all the current phenomenological constraints from the Higgs signal rate and electroweak precision data, and also satisfy the theoretical constraints from vacuum stability, perturbativity, and a SFOEWPT. We perform a full simulation of signal and background processes with the MadGraph5 parton-level event generator [25], using PYTHIA6 [26] to simulate the parton shower and the DELPHES3 fast detector simulation [27]. Further, we use the TMVA package [28] to implement the Boosted Decision Tree (BDT) algorithm to optimize the event selection, finally obtaining the signal significance from the BDT score distributions of background and signal events. Based on this analysis and the results shown in Fig. 4 below, we arrive at the following conclusions:

• For singlet-like Higgs masses below 500 GeV, the significance of the 4b final state is competitive with the bbγγ and 4τ final states, and it is possible to make a discovery at the 14 TeV HL-LHC with a luminosity of 3 ab^-1 for some portions of the SFOEWPT-viable parameter space.

• For singlet-like Higgs masses above 500 GeV, the significance of the 4b final state is higher than in the bbγγ and 4τ final states but somewhat below recent projections for the bbWW* final state.

• With the results of the benchmark models that produce the minimal di-Higgs signal rate, we found that it is impossible to exclude (at the 95% confidence level) all portions of parameter space consistent with a SFOEWPT and present phenomenological constraints at the HL-LHC.

The discussion of our analysis leading to these conclusions is organized as follows: Sec. II introduces the xSM framework and describes both theoretical and phenomenological constraints. In Sec. III, we describe the requirements for a SFOEWPT and the parameter scan. In Sec. IV, we discuss the simulation and analysis of the 4b signal and background in detail and also present prospects for the 14 TeV HL-LHC. Section V is dedicated to the conclusions.
In the Appendix, we perform a global analysis of ATLAS Run 2 single Higgs measurements and present the distributions of the kinematic variables used in the BDT analysis.

A. The Model

The most general, renormalizable scalar potential in the xSM is given in Eq. (2), where S is the real singlet and H is the SM Higgs doublet. When S obtains a vacuum expectation value (vev, see below), the a_1 and a_2 parameters induce mixing between the singlet scalar and the SM Higgs doublet, thereby providing a portal for the singlet scalar to interact with other SM particles. A Z_2 symmetry is present in the absence of the a_1 and b_3 terms, a necessary condition for S to be a viable dark matter candidate. In what follows, however, we retain both parameters in our study, as they play an important role in the strength of the electroweak phase transition (EWPT) and also in the di-Higgs signal rate at collider experiments. After EWSB, H → (v_0 + h)/√2 with v_0 = 246 GeV, and S → x_0 + s, where x_0 is the vev of S, without loss of generality. The stability of the scalar potential requires the quartic coefficients along all directions in field space to be positive. This translates into a requirement of a positive Hessian determinant of the quartic part of the potential with respect to the fields s and h, which leads to the bounds λ > 0, b_4 > 0, and a_2 > −2√(λ b_4). Another way to obtain these bounds is by parameterizing (h, s) as (r cos α, r sin α) in field space and extracting the quartic coefficient of r along the α direction; requiring this coefficient to be larger than zero for any value of cos α leads to the same conditions. Utilizing the minimization conditions, one can express two potential parameters in Eq. (2) in terms of the vevs and the other parameters. Two additional conditions need to be satisfied for (v_0, x_0) to be a stable minimum. One of them is that (v_0, x_0) minimizes the potential locally. Also, this minimum point should be a global minimum, a requirement that we impose numerically. As for perturbativity, we place naïve requirements on the quartic couplings; however, as discussed in Sec. III, when scanning over the parameter space for benchmark points we implement more stringent bounds on those parameters than these requirements. One may refer to Refs. [29-31] for more details about the perturbativity bound in the xSM. The elements of the mass-squared matrix are obtained from the second derivatives of the potential at the minimum. After diagonalization of this mass matrix, the physical masses of the two neutral scalars can be expressed in terms of its entries, with m_2 > m_1 by construction. The mass eigenstates and gauge eigenstates are related by a rotation matrix, where h_1 is the SM-like Higgs boson with m_1 = 125 GeV and h_2 is identified as the singlet-like mass eigenstate. The mixing angle θ can be expressed in terms of the vevs, the physical masses, and the potential parameters. From Eq. (11), one can observe that the couplings of h_1 and h_2 to the SM vector bosons and fermions are rescaled with respect to the corresponding SM Higgs couplings, where XX represents final states consisting of pairs of SM vector bosons or fermions. In this case, all the signal rates associated with the single Higgs measurements are rescaled by the mixing angle only, where σ_h1 and BR are the production cross section and branching ratio in the xSM, and the quantities with the superscript SM are the corresponding values in the SM.
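For orientation, the scalar potential referred to at the beginning of this subsection is usually written, in the parameterisation standard in the xSM literature (our transcription, since the equation itself is not reproduced above), as

V(H, S) = -\mu^2\, H^\dagger H + \lambda\, (H^\dagger H)^2 + \frac{a_1}{2}\, H^\dagger H\, S + \frac{a_2}{2}\, H^\dagger H\, S^2 + \frac{b_2}{2}\, S^2 + \frac{b_3}{3}\, S^3 + \frac{b_4}{4}\, S^4 ,

where the a_1 and b_3 terms are the ones that break the Z_2 symmetry, and \mu^2 and b_2 are the two parameters that can be traded for the vevs through the minimization conditions.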
In the xSM with m_2 > m_1, we have BR = BR^SM, because the partial width of each decay mode is rescaled by cos²θ and no new decay channel appears. In order to investigate di-Higgs production, we also require the tri-Higgs couplings; the one relevant for resonant di-Higgs production is λ_211. In this work, we focus on the situation where m_2 > 2 m_1, such that resonant production of h_2 and a subsequent decay to h_1 h_1 is allowed. We are therefore able to calculate the partial width Γ(h_2 → h_1 h_1) and the total width of h_2, where Γ^SM(m_2) represents the total width of a SM Higgs boson with a mass of m_2, which is taken from Ref. [32]. The signal rate for pp → h_2 → XX normalized to the SM value will be used to constrain the parameter space in the next section. The production cross section for the process pp → h_2 → h_1 h_1 can also be calculated, where s_θ ≡ sin θ and, for future reference, c_θ ≡ cos θ.

B. Phenomenological Constraints on the Model Parameters

The mixing angle θ between the singlet and the SM Higgs doublet in the xSM is constrained by measurements of the single SM-like Higgs signal strengths. We obtain a 95% C.L. upper limit on sin²θ of 0.131 by performing a global fit with current ATLAS Run 2 single Higgs measurements, as discussed in Appendix A 1. The constraints in the (m_2, c_θ) plane can be found in our previous work [24]. We will also ensure that each benchmark point in the parameter scan in the next section satisfies all the limits mentioned above. Finally, we discuss the constraints from electroweak precision observables (EWPO). In the xSM, setting the parameter U = 0 is a good approximation; we therefore focus only on the deviations in the S and T parameters, which we take from the Gfitter group [41], where ρ_ij is the covariance matrix in the (S, T) plane. Again, we impose the criterion in the parameter scan in the next section that, for each benchmark point, Δχ²(m_2, c_θ) defined below is less than 5.99, which corresponds to deviations of the S and T parameters within the 95% C.L., where the ΔO_i^0 denote the central values in Eq. (21) and (σ²)_ij ≡ σ_i ρ_ij σ_j, with σ_i being the error in S or T as indicated in Eq. (21). One can observe from Fig. 1 in Ref. [24] that, in general, the upper limit on sin²θ extracted from the EWPO is more stringent than the bound obtained from the Higgs global fit, with a limit changing from 0.12 for m_2 = 250 GeV to 0.04 for m_2 = 950 GeV.

The character of the EWPT is understood in terms of the finite-temperature effective potential. However, it is well known that the standard derivation of V_eff^(T=0) suffers from gauge dependence, as discussed in depth in Ref. [42]. Here we employ a high-temperature expansion to restore gauge independence in our analysis (see Ref. [43] for details). In this case, we include in our finite-temperature effective potential the T = 0 tree-level potential and the gauge-independent thermal mass corrections to V_eff^(T=0), which are crucial to restore electroweak symmetry at high temperature. In this limit, the a_1 and b_3 parameters generate a tree-level barrier between the broken and unbroken electroweak phases, thereby allowing for a first-order EWPT. We also note that the presence of the a_2 term may further strengthen the first-order transition, as discussed in Ref. [10]. In the high-temperature limit, we follow Refs.
[10, 44] and write the T-dependent, gauge-independent (indicated by the presence of a bar) vevs in a cylindrical coordinate representation, with v̄(T = 0) = v_0 and x̄(T = 0) = x_0. The critical temperature T_c is defined as the temperature at which the broken and unbroken phases are degenerate. Once the critical temperature is found, one is able to evaluate the quenching effect of the sphaleron transitions in the broken electroweak phase (see, e.g., Ref. [45]), which is related to the energy of the electroweak sphaleron, itself proportional to the vev of the SU(2)_L doublet, v̄(T). A first-order EWPT is strong when this quenching effect is sufficiently large; the criterion is, approximately, that the ratio of the broken-phase vev to the critical temperature be of order one or larger.

To select the benchmark parameter points for the collider simulation, we perform a scan over the parameters a_1, b_3, x_0, b_4, and λ over finite ranges, while the remaining parameters are fixed by the input values v_0 = 246 GeV and m_h = 125 GeV. Our lower bounds on the quartic couplings b_4 and λ guarantee tree-level vacuum stability. We also require a naïve perturbativity bound on the Higgs portal coupling, a_2/2 ≤ 5. For each set of randomly chosen parameters, we calculate c_θ, m_2, and λ_211, and only keep the points that satisfy all the phenomenological constraints mentioned in the previous section (Higgs signal rate, LHC searches for the heavy Higgs h_2, and EWPO). We then pass these sets of parameters to the CosmoTransitions package [46] and numerically evaluate all the quantities related to the EWPT, such as the critical temperature, the sphaleron energy, and the tunneling rate into the electroweak symmetry-broken phase, using as input the xSM finite-temperature effective potential in the high-temperature limit. Finally, we only keep the sets of parameters that satisfy the strong first-order EWPT criterion defined above and also have a sufficient tunneling rate to prevent the universe from remaining in a metastable vacuum. From the randomly chosen parameters satisfying the foregoing requirements, we identify benchmark points with maximum and minimum signal rate in 11 consecutive h_2 mass windows of width 50 GeV ranging from 300 to 850 GeV. The upper bound of m_2 = 850 GeV follows from the observation that we did not find a choice of parameters for m_2 larger than 850 GeV that gives a SFOEWPT, even though our scan in m_2 reaches one TeV. We list all the benchmark points in Tables I and II. We note that the benchmark points B3 and B4 for maximum signal rate in Table I have already been excluded by the CMS h_2 → ZZ search [33], but we retain them here to make contact with the results of previous studies for comparison. In contrast, the new ATLAS and CMS limits on resonant di-Higgs production in the bbττ channel [17,18] do not yet appear to constrain the SFOEWPT-viable parameter space.

A. Reproduction of 13 TeV LHC results

For the signal process, the h_2 mass is varied from 300 GeV to 1500 GeV in steps of 100 GeV. For the background processes, we generate pp → 4b and pp → tt̄ with the top quarks decaying hadronically. We follow the ATLAS resolved analysis in Ref. [47] and reproduce the signal efficiency and background distributions in Figs. 1 and 2. The default CMS DELPHES card is used rather than the ATLAS DELPHES card, as it better approximates the b-tagging and jet reconstruction performance.
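Before turning to the collider analysis, it is useful to record the washout criterion imposed in the scan just described. In the form commonly used in this literature (our transcription; the precise inequality is not reproduced in the text above) it reads

\frac{\bar{v}(T_c)}{T_c} \gtrsim 1 ,

i.e. the gauge-independent broken-phase vev at the critical temperature must be comparable to or larger than T_c for the sphaleron transitions to be sufficiently quenched.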
Jets are constructed using the anti-k_t clustering algorithm with a radius parameter R set to 0.4, and the efficiency for a b-quark-initiated jet to pass the b-tagging requirements is parameterized as a function of the jet transverse momentum p_T in a manner corresponding to the average 70% efficiency working point described in Ref. [49] (this is the default setting in the DELPHES CMS card).

The selection criteria for the ATLAS analysis are as follows:

• Events must have at least four b-tagged jets with p_T > 40 GeV and |η| < 2.5. If the number of b-tagged jets is greater than four, the four jets with the highest p_T are selected to reconstruct two dijet systems in each event.

• Two dijet systems are formed using the selected b-tagged jets. The two jets in each dijet system are required to have ΔR < 1.5, and the transverse momentum of the leading (subleading) dijet system must be greater than 200 (150) GeV.

• The leading and subleading dijet systems must satisfy a set of requirements on their invariant masses that depends on the reconstructed invariant mass (m_4j) of the four selected b-tagged jets. The central values used here for m_2j^lead and m_2j^subl are somewhat lower than in the ATLAS analysis [47] to account for differences in the treatment of jets in DELPHES compared to the ATLAS simulation.

The acceptance times efficiency values for signal events with m_2 ranging from 500 to 1000 GeV are compared with the ATLAS results in Fig. 1. Overall, the signal region efficiencies obtained in this analysis agree well with those from Ref. [47]. The background event yields in the signal region are summarized in Table III. In addition to the yields from 4b and tt̄ production, the contribution from bbcc production with the c-quark jets passing the b-tagging requirements is estimated assuming that the kinematic distributions of jets in bbcc events are similar to those of 4b events, where N_4b is the estimated number of QCD 4b events, σ_bbcc and σ_4b are the parton-level cross sections for the bbcc and 4b processes, and the remaining factors account for the relative tagging efficiencies of b- and c-quark jets.

• Dijet systems are formed such that the separation ΔR_jj between the two jets satisfies requirements that depend on m_4j.

• If more than one pair of dijet systems satisfies this constraint, the pair with the smallest value of the variable D_h1h1 is selected.

In order to optimize the separation between signal and background events, the analysis in this paper relies on a BDT trained on half of the simulated signal and background events and validated with the other half. Separate training is performed for each benchmark point studied. The kinematic quantities included in the training of the BDT are shown in Appendix B. Among those variables, ΔR_jj^lead, ΔR_jj^subl, and m_4j are consistently ranked high in terms of discrimination power for all benchmark points. To derive the optimal sensitivity, the BDT score distributions for signal and background events are rebinned such that each bin contributes the maximum S/√B (S and B are the numbers of signal and background events in that bin), starting from the bin with the highest BDT score, where the signal contributes the most. The rebinning also requires a minimum of ten background events per bin to minimize the impact of statistical fluctuations. As an illustration, the rebinned BDT score distributions for two benchmark points are shown in Fig. 3.
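The rebinning of the BDT score distribution described above can be sketched as a greedy merge. The implementation below is illustrative only: the function name, the assumed score range of [-1, 1] and the unweighted-event handling are ours, and this is not necessarily the exact procedure behind the published distributions.

import numpy as np

def rebin_bdt_scores(sig_scores, bkg_scores, min_bkg=10.0, n_fine=200):
    # Histogram signal and background on a fine grid, then, starting from the
    # most signal-like end, merge fine bins until each merged bin holds at
    # least `min_bkg` background events; report per-bin s/sqrt(b).
    edges = np.linspace(-1.0, 1.0, n_fine + 1)
    s_fine, _ = np.histogram(sig_scores, bins=edges)
    b_fine, _ = np.histogram(bkg_scores, bins=edges)
    bin_edges, s_bins, b_bins = [edges[-1]], [], []
    s_acc = b_acc = 0.0
    for i in range(n_fine - 1, -1, -1):            # walk from high to low score
        s_acc += s_fine[i]
        b_acc += b_fine[i]
        if b_acc >= min_bkg:
            bin_edges.append(edges[i])
            s_bins.append(s_acc)
            b_bins.append(b_acc)
            s_acc = b_acc = 0.0
    if (s_acc or b_acc) and b_bins:                # fold any leftover into the last bin
        s_bins[-1] += s_acc
        b_bins[-1] += b_acc
        bin_edges[-1] = edges[0]
    per_bin_z = [s / np.sqrt(b) for s, b in zip(s_bins, b_bins) if b > 0]
    return bin_edges[::-1], s_bins, b_bins, per_bin_z

Summing the per-bin contributions in quadrature then gives a simple estimate of the combined sensitivity of the binned discriminant.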
The production cross sections and the efficiencies of the backgrounds before the BDT selection are summarized in Table IV. The probability that the background-only model yields an observed number of events at least as large as the expectation for the signal-plus-background model is then translated into the corresponding N_σ Gaussian significance. As a test of the statistical analysis, it was verified that the 95% upper limit on the cross section as a function of resonance mass derived from our emulation of the 13 TeV ATLAS analysis (discussed in Sec. IV A) agrees with the results from Ref. [47] within 10% for h_2 masses up to 750 GeV and within 20% up to 850 GeV. The slight deviation at higher mass may be due to the use of the asymptotic formula, which is known to produce upper limits that are too aggressive for the low number of expected events at high mass with only 3.2 fb^-1 of luminosity.

The significance N_σ as a function of resonance mass is shown in Fig. 4, where the upper and lower boundaries of the band correspond to the influence of uncertainties in the production cross sections for the 4b and tt̄ backgrounds as given in Table IV. The two boundaries are obtained by coherently changing the number of events for the two backgrounds by the 1σ uncertainties listed in Table IV. The significance is compared to that obtained with the same method for the bbγγ and 4τ channels at the 14 TeV HL-LHC [23] and for the bbWW* channel at the 13 TeV LHC [24] in Fig. 5. We only compare the benchmark points BM3 to BM11, because the first two BM points are different from those in Ref. [23]. We find that for a heavy Higgs mass m_2 less than 500 GeV, the bbγγ channel is the most sensitive channel in the search for a resonant di-Higgs signal. Moreover, the 4b channel is competitive with the bbγγ channel and could serve as a complementary check if a signal is observed in the bbγγ channel. For m_2 larger than 500 GeV, however, the 4b channel provides better sensitivity than the bbγγ or 4τ channels, but not as good as the bbWW* channel [24]. We note, however, that the analysis given in Ref. [24] employs a novel Heavy Mass Estimator (HME) and assumes that the systematic uncertainties will be improved compared to those quoted in the recent CMS bbWW* analysis [16], which did not implement the HME. These differences may account for the stronger projected limits given in Ref. [24] than one would infer by rescaling the results in Ref. [16] by the improved statistics expected for the HL-LHC [57]. We also note that Ref. [24] assumes an ATLAS-CMS combination, thereby doubling the number of events. We do not make such an assumption in the present study.

V. CONCLUSION

Investigating the thermal history of EWSB is important for determining whether or not the cosmic matter-antimatter asymmetry was generated through EWBG. Monte Carlo simulations indicate that the EWSB transition is a crossover in the minimal SM, given the observed Higgs mass. In this context of a SM-only universe, EWBG cannot occur. However, introducing new scalar degrees of freedom can change the behavior of the thermal effective potential and make a SFOEWPT possible during the EWSB era. Adding a real scalar singlet is one of the simplest ways to extend the SM, yielding the xSM, and to realize this possibility.
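For reference, a commonly used asymptotic expression for the significance of a counting observation with expected signal s and background b (which may differ in detail from the exact construction used for Fig. 4) is

Z_A = \sqrt{\,2\left[(s+b)\,\ln\!\left(1+\frac{s}{b}\right)-s\right]\,}\;\;\longrightarrow\;\;\frac{s}{\sqrt{b}} \quad \text{for } s \ll b ,

with the per-bin values combined in quadrature when a binned discriminant such as the BDT score is used.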
Previous studies have demonstrated the existence of a strong correlation between an enhanced coupling of the heavy singlet-like scalar to a SM-like di-Higgs pair and the occurrence of a SFOEWPT in the xSM parameter space. There is therefore strong motivation to search for resonant production of a heavy singlet-like scalar that decays to a SM-like di-Higgs state as a probe of the SFOEWPT in the xSM. In this paper, we focused on the possibility of discovering at the HL-LHC resonant gluon-fusion production of the heavy singlet-like scalar of the xSM decaying into a pair of SM-like Higgs bosons in the four b-quark final state. The four b-quark final state is a promising channel, given its large branching ratio, but it also suffers from a significant QCD background. In analyzing this process, we first validated our simulation against the ATLAS 13 TeV cut-based analysis, then implemented the BDT, a multivariate analysis method, to help classify signal and background events for the HL-LHC. We selected 11 benchmark points each for the maximum and minimum di-Higgs signal rates that yield a SFOEWPT and satisfy all the theoretical and phenomenological bounds, for heavy singlet-like scalar masses in successive 50 GeV windows ranging from 300 to 850 GeV. We then analyzed the signal significance for the 14 TeV HL-LHC with a luminosity of 3 ab^-1. We also compared the results with earlier projections for the bbγγ and 4τ channels and find that, for masses of the singlet-like scalar larger than 500 GeV, the significance for the 4b channel is superior to both of these other channels. For heavy singlet-like scalar masses less than 500 GeV, the significance for the 4b state with maximum signal rate can be larger than 5. This significance is comparable to that of the bbγγ final state and somewhat better than that projected for the 4τ final state. While our projection for the reach using the 4b channel is somewhat below that for the bbWW* channel as analyzed in Ref. [24], the latter work utilized a new Heavy Mass Estimator and assumptions about future reductions in systematic uncertainties that await validation with new data. Thus, inclusion of the 4b channel in a comprehensive search strategy that also includes the bbγγ, 4τ, and bbWW* channels is strongly motivated. In terms of exclusion, we find that at the future 14 TeV HL-LHC one can exclude the mass of a heavy singlet-like scalar up to around 680 GeV for the benchmark points with maximum signal rate. However, a signal in the case of the minimum signal rate benchmark points is far from being excluded. In this sense, a future 100 TeV pp collider may therefore be required to fully exclude the possibility of generating a SFOEWPT in the xSM.

Appendix B: Distributions of BDT variables

We plot here the signal and background distributions of the kinematic variables used in the BDT analysis. The signal is taken to be the benchmark point B7 in Table I.
6,655.6
2019-06-12T00:00:00.000
[ "Physics" ]
The Heidelberg Basin drilling project: Geophysical pre-site surveys

Currently, the Heidelberg Basin is under investigation by new cored research boreholes to enhance the understanding concerning the control on Pliocene and Quaternary sedimentation by (neo)tectonics and climate. The Heidelberg Basin is expected to serve as a key location for an improved correlation of parameters that characterise the climate evolution in North Europe and the Alpine region. The recovery of sediment successions of high temporal resolution that are complete with respect to the deposition of Pleistocene glacials and interglacials in superposition is of special importance. Prior to the new research boreholes in Viernheim and Heidelberg, geophysical pre-site surveys were performed to identify borehole locations that best achieve these requirements. In the area of the Heidelberg Basin the strongest negative gravity anomaly of the entire Upper Rhine Graben is observed (apart from the Alps), hinting at anomalously thick sediment deposits. However, especially reflection seismic profiles contributed significantly to the decision about the borehole locations. In the city of Heidelberg, for the first time, the depocentre of the Heidelberg Basin, as indicated by additional subsidence compared to its surroundings, was mapped. In this area, sediments dip towards the eastern margin of the Upper Rhine Graben. This is interpreted to represent a rollover structure related to the maximum subsidence of the Upper Rhine Graben in this region. At the Viernheim borehole location the seismic survey revealed several faults. Although these faults are mainly restricted to depths greater than 225 m, the borehole location was finally adjusted with respect to this information.

[Das Bohrprojekt Heidelberger Becken: Geophysikalische Voruntersuchungen]

Kurzfassung: The Heidelberg Basin is currently being investigated by new cored boreholes in order to extend our knowledge of how Pliocene and Quaternary sedimentation was controlled by climate and (neo)tectonics. The Heidelberg Basin is expected to represent a key location for an improved correlation of parameters that characterise the climate evolution in northern Europe and in the Alpine region. Of particular importance is therefore the recovery of sediment successions of high temporal resolution that are as complete as possible with respect to the deposition of cold-stage and warm-stage Pleistocene sediments in superposition. Prior to the new cored boreholes near Viernheim and Heidelberg, geophysical pre-site surveys were carried out in order to identify borehole locations that best meet these requirements. In the Heidelberg area the largest negative gravity anomaly of the entire Upper Rhine Graben is observed (with the exception of the Alps), pointing to unusually thick sediment deposits. But reflection seismic measurements in particular contributed to the selection of the drilling sites. In the urban area of Heidelberg the depocentre of the Heidelberg Basin was mapped for the first time, imaged by additional subsidence relative to its surroundings. In this area the sediments dip towards the eastern graben margin. This is interpreted as a rollover structure connected with the maximum subsidence of the Upper Rhine Graben in this region. At the Viernheim borehole location, the seismic survey revealed several faults.

The Heidelberg Basin drilling project: Geophysical pre-site surveys

HERMANN BUNESS, GERALD GABRIEL & DIETRICH ELLWANGER*

*Addresses of authors: H.
Buness, Leibniz Institute for Applied Geophysics, Stilleweg 2, D-30655 Hannover, Germany. E-mail: <EMAIL_ADDRESS>G. Gabriel, Leibniz Institute for Applied Geophysics, Stilleweg 2, 30655 Hannover, Germany. E-mail: <EMAIL_ADDRESS>D. Ellwanger, Landesamt für Geologie, Rohstoffe und Bergbau im Regierungspräsidium Freiburg, Alberstraße 5, 79104 Freiburg im Breisgau, Germany. E-mail: <EMAIL_ADDRESS>

Eiszeitalter und Gegenwart / Quaternary Science Journal 57/3–4, 338–366, Hannover 2008

Introduction

The Heidelberg Basin is located in the eastern part of the northern Upper Rhine Graben (URG; Fig. 1), bordered by the dominating master fault of the URG to the east. The boundary fault is assumed to extend deep into the crust, by 15–24 km according to MAUTHE, BRINK & BURRI (1993). The sedimentary fill of the URG is characterised by synthetic and antithetic faults which strike parallel or subparallel to this boundary fault. The area of the Heidelberg Basin was subjected to continuous and strong subsidence since the late Oligocene (SCHUMACHER 2002). Up to 1500 m of sediments were deposited in the early Miocene alone (comprising the Cerithia, Corbicula, and Hydrobia beds). The thickest succession of Quaternary sediments can be found here (up to 350 m according to BARTZ 1974; Fig. 2, top). The Heidelberg Basin therefore constitutes the most complete sediment archive of the whole URG. Sediments of different geosystems interfere with each other: the local system of the Neckar River (Odenwald), the regional Upper Rhine Graben–Highlands (Black Forest, Vosges) system, and the supra-regional Alps–Upper Rhine Graben system. Especially the distal deposits of the Alps–Upper Rhine Graben system are supposed to contain information about both the Alpine and the north European climate history, which cannot be observed elsewhere (e.g. ELLWANGER et al. 2005). The term Heidelberg Basin stands for the Miocene to Quaternary depocentre of the northern URG with an extent of some tens of kilometres. The term 'Heidelberger Loch', sometimes also found in the literature, was introduced by SALOMON (1927) and denotes the locally very delimited centre of subsidence around the city of Heidelberg. The exploration of this sediment archive is the aim of two new research boreholes sponsored by the Leibniz Institute for Applied Geophysics (LIAG, the former Leibniz Institute for Applied Geosciences, GGA-Institut) and the geological surveys of Baden-Württemberg (LGRB) and Hessen (HLUG) (Fig. 2, bottom). One of the boreholes was drilled exactly in the depocentre of the basin, close to the outlet of the Neckar River from the Odenwald into the plain of the Upper Rhine Graben ('Heidelberg UniNord', ELLWANGER et al. 2008). The other one was drilled in the geographic centre of the Heidelberg Basin about 17 km to the northwest, near the city of Viernheim (HOSELMANN 2008). These two locations are complemented by two boreholes in Ludwigshafen that were drilled on the 'Parkinsel' island on the western margin of the basin (Fig. 2, bottom; WEIDENFELLER & KNIPPING 2008). Prior to drilling these boreholes, a number of seismic profiles were carried out to facilitate a well-founded drilling design (Fig. 2, bottom).
This included an estimation of depths for the geological targets, i.e. Pliocene and Pleistocene strata, as well as the detection of possible fault zones. The latter point was important because the boreholes were to serve as a stratigraphic reference for the Quaternary sediment succession. Furthermore, high-resolution seismic profiles in general are able to reveal tectonic and sedimentological events caused by the continued subsidence of the basin. This article presents the results of the pre-site surveys. However, at the present stage, as there is not even a consistent and homogeneous interpretation of all available borehole data, it is not possible to draw conclusions about the basin dynamics (e.g. a seismostratigraphic interpretation). A more exhaustive analysis of the data could be done in the framework of a project which was proposed to the German Research Foundation and is currently under evaluation.

Geological setting and nomenclature

Knowledge about the deeper subsurface of the URG is based mainly on seismic profiling and the evaluation of boreholes of the oil and gas industry, which was carried out quite intensively in this area. According to MAUTHE, BRINK & BURRI (1993), about 5000 km of seismic lines were recorded between 1970 and 1992. An extensive presentation of these activities is not possible due to the lack of corresponding publications. However, isobath maps derived from these data were published, e.g. for the Tertiary by DOEBL & OLBRECHT (1974) or for the Quaternary by BARTZ (1974). Two deep-reaching reflection seismic profiles running across the southern and northern URG were carried out in 1988 in the framework of the DEKORP-ECORS Project (BRUN & GUTSCHER 1992). The northern profile crosses the URG about 20 km north of Heidelberg. The varying reflector signature of the lower crust images the asymmetric structure of the URG. The largest Tertiary sediment thickness of 3400 m is observed close to the eastern rim of the graben; it reduces stepwise to 300 m below the western border. However, depths corresponding to the Quaternary, the Pliocene, and down to the middle Miocene units are poorly resolved. The interaction between tectonics and sedimentation in the northern URG was investigated by DERER by means of sequence stratigraphy (DERER 2003, DERER et al.
2003, DERER, SCHUMACHER & SCHÄFER 2005), who found the area to be divided into two half-grabens with opposing tilt directions, separated by a transfer zone. The study was based on seismic profiles and geophysical borehole measurements made by the oil and gas industry. Seismic facies were assigned to several lithostratigraphic units from the Eocene to the upper Miocene. About 190 km of industrial seismic lines were shot between 1981 and 1993 in the Heidelberg Basin. However, these lines do not extend to the depocentre of the Heidelberg Basin ('Heidelberger Loch'). Furthermore, only little information about the structure of Plio-Pleistocene sediments can be derived from the hydrocarbon seismic lines, because these focused only on storage or sealing horizons, and hence on deep structures. The existing gap in the reflection seismic data near the city of Heidelberg can be partly filled by information from boreholes related to hydrogeological investigations. In the Heidelberg area two flush boreholes are available that reveal a large thickness of Quaternary sediments: the 350 m deep Entensee borehole from 1973 (CONRADS & SCHNEIDER 1977), about 1 km north, and the 1022 m deep Radium-Sol Therme borehole from 1918 (SALOMON 1927), about 1 km south of the new borehole Heidelberg UniNord 1 (Fig. 3). The interpretation of these two boreholes is controversially discussed. The main problem here is the confusing definition and application of the terms Quaternary and Pliocene (and some other stratigraphic terms) in publications, reports and archives related to the URG. The most frequently applied 'traditional' definition relates 'Quaternary' to the onset of alpine sediments, i.e. to a change in sediment provenance. 'Pliocene' sediments come from local sediment sources, e.g. the Black Forest, Vosges, and Odenwald; 'Quaternary' sediments include or correlate with sediments of alpine origin. This change of provenance is, of course, a matter of lithostratigraphy, although chronostratigraphic terms are used. This different use of the term 'Quaternary' is illustrated by the controversial interpretations of the Radium-Sol Therme borehole by SALOMON (1927), BARTZ (1951) and FEZER (1997). SALOMON and BARTZ suggest depths of '382 m' and 'almost 400 m', both referring to the lithostratigraphic version of 'Quaternary'; FEZER suggests a depth of 650 m, which is based upon a calculation of sedimentation rates, i.e. he applies the term Quaternary in its proper chronostratigraphic sense. The aim of this paper is not to solve problems of stratigraphic nomenclature but to report on the seismic and gravimetric activities carried out prior to the new drilling campaign. Our references are the old data (publications, reports, and archive data), which are, at this stage, not re-interpreted but used as they are. Stratigraphic terms are set in quotation marks where we feel the authors use the lithostratigraphic version (e.g. 'Quaternary'), and without quotation marks when used as chronostratigraphic terms (e.g. Quaternary). A reflection seismic profile that focused especially on the Quaternary deposits was published by HAIMBERGER, HOPPE & SCHÄFER (2005). This river seismic profile, recorded along the Rhine River between Mainz and Mannheim and along parts of the Neckar River over a length of 150 km in total, revealed high-resolution information which was able to define the base of 'Pleistocene' sediments mainly north of Mannheim. The maximum thickness of the 'Pleistocene' was found in the area of Mannheim and confirmed the map published by BARTZ (1974; Fig. 2).
This map images an increase of 'Pleistocene' sediment thickness towards the eastern boundary fault of the Upper Rhine Graben, where it amounts to more than 350 m. The interpretation of the river seismic data revealed well the alternating sequence of coarse-grained layers (aquifers, so-called 'Kieslager') and fine-grained layers (aquitards, so-called 'Zwischenhorizonte') typical of the Heidelberg Basin. This hydrostratigraphic classification (Table 1) is often used in the northern part of the Upper Rhine Graben due to a lack of geochronological data concerning the Pliocene and Pleistocene strata. It is based on a macroscopic description of the sediments, their colours and carbonate content, and, as complementary information, on gamma logs where available. The hydrostratigraphy was broadly introduced by BARTZ (1982). It is presently much used in hydrogeology (HGK 1999) and was last updated by WEIDENFELLER & KÄRCHER (2008). On a larger scale it distinguishes three coarse-grained layers separated by two fine-grained layers, which are regionally distributed. In the system after BARTZ (1982) this alternating succession is separated from the underlying provenance change by another unit called 'Altquartär' ('Early Quaternary'). This term again is used as a description of a sediment type rather than a stringent chronostratigraphic classification. In more recent publications and reports, the lowest coarse-grained bed is defined as Early Quaternary (WEIDENFELLER & KÄRCHER 2008). The thickness of each layer is quite variable in a lateral direction. Locally, additional fine- or coarse-grained layers of smaller extent can be intercalated. The occurrence of fine-grained sediments was traditionally related to interglacial periods, and the occurrence of coarse-grained layers to glacial periods (BARTZ 1982). One of the aims of this drilling project is to correlate the aforementioned hydrostratigraphy with the lithostratigraphy of the southern part of the Upper Rhine Graben (SYMBOLSCHLÜSSEL GEOLOGIE BADEN-WÜRTTEMBERG). The upper gravel layer can be quite well correlated with the Mannheim Formation, and the upper interlayer (fine-grained layer) with the Ladenburg Horizon. The succession of the middle sand-gravel layer, the lower sand-silt layer and the interlayer in between is only roughly equivalent to the Weinheim Beds (Table 1).

Gravimetry

Generally, gravity anomaly data are expected to reflect a first-order pattern of the shape of the Heidelberg Basin, especially the varying thickness of (unconsolidated) Pliocene/Pleistocene sediments. The mean density of Plio-Pleistocene sediments should be reduced compared to Tertiary or even older and more compacted sediments. Therefore the anomalously thick sedimentary fill of the Heidelberg Basin should cause a negative gravity anomaly that can be related to the extent of the basin and its depth. But in fact the situation is more complicated.

Gravity data

Gravity data for the Upper Rhine Graben are available from several sources. ROTSTEIN et al. (2006) have compiled the most recent Bouguer gravity map of the Upper Rhine Graben, comprising all the available data from France and Germany (Fig. 4; regional map).
This map is based on about 33,000 Bouguer gravity values. About one third of the data was provided by the Leibniz Institute for Applied Geophysics, covering the German part of the Rhine Graben. This dataset consists of data available from the Geophysikalische Reichsaufnahme, but also of some local surveys. The French data are mainly based on two data sources, which are themselves compilations of numerous surveys. The first dataset, about fifty percent of the French data, was provided by the Bureau de Recherches Géologiques et Minières, the second one by the Mines de Potasse d'Alsace. A complete Bouguer anomaly was recalculated for the entire new dataset, using a density of 2670 kg/m³ and considering newly calculated terrain reductions. The distribution of the gravity stations is strongly inhomogeneous; their spatial coverage varies significantly, between about 0.25 stations/km² in some parts of the Vosges and Black Forest and 40 stations/km² in some parts of the graben itself. Note that the complete Bouguer map is based on a calculated grid of 1 km. The area of the Heidelberg Basin (Fig. 4; local map) is characterised by the strongest gravity anomaly of the entire Upper Rhine Graben (apart from the most southern part, where the regional trend is strongly affected by the Alps). It is in the order of -40 to -50 mGal, with the absolute minimum close to the outlet of the Neckar River from the Odenwald into the plain of the Upper Rhine Graben (Fig. 4).

Preliminary Interpretation

When interpreting the anomaly apparently related to the sediment fill of the Heidelberg Basin, some general knowledge regarding the sources of gravity anomalies in the Upper Rhine Graben has to be considered. ROTSTEIN et al. (2006) performed some two-dimensional calculations along profiles approximately perpendicular to the strike of the graben. Although these profiles were restricted to the southern part of the graben, and a two-dimensional approach is not suitable to investigate the complex graben geology in great detail, their general result showed that the gravity anomalies are not only caused by the sediment fill of the graben but are also strongly affected by density inhomogeneities in the crystalline basement. One of the profiles south of Karlsruhe revealed increasing sediment thicknesses from west to east accompanied by increasing Bouguer gravity values; the structure of the sediments was well constrained by reflection seismic data and drilling information. Therefore, for this profile the gravity effect of the sediments must be overcompensated by density contrasts in the basement. The density of the basement must be assumed to show high lateral variation, as does that of the rocks of the adjacent graben shoulders. Therefore, some impact on the Bouguer anomalies observed in the area of the Heidelberg Basin must also be assumed. A preliminary 2-D profile which crosses the Rhine Graben close to Heidelberg is shown in Fig. 5. Although the structural resolution of the sediments is rather low, the necessity to introduce at least lateral density contrasts at the western and eastern boundary faults and within the westernmost part of the basement is obvious. Nevertheless, the sediments of the Heidelberg Basin will undoubtedly contribute to the negative gravity anomalies observed in this region. Confirming the map of 'Quaternary' thickness in the Upper Rhine Graben published by BARTZ (1974), the new reflection seismic profiles recorded in the framework of the pre-site surveys reveal anomalously thick Neogene sediments.
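As a reminder of what the recalculated values mentioned above consist of, the complete Bouguer anomaly is conventionally assembled as follows (our summary of the standard reduction; sign conventions vary between compilations):

\Delta g_B = g_{\mathrm{obs}} - \gamma_0(\varphi) + \delta g_{\mathrm{FA}} - \delta g_{\mathrm{plate}} + \delta g_{\mathrm{terrain}},
\qquad \delta g_{\mathrm{FA}} \approx 0.3086\,h\ \mathrm{mGal}, \qquad \delta g_{\mathrm{plate}} = 2\pi G \rho\, h \approx 0.1119\,h\ \mathrm{mGal},

with the station height h in metres, the normal gravity \gamma_0(\varphi) at latitude \varphi, and the reduction density \rho = 2670 kg/m³ quoted in the text.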
From the downhole logging experiments conducted in the Heidelberg drilling project, low densities between 2000 kg/m³ and 2300 kg/m³ can be estimated for the Pleistocene sediments (HUNZE & WONIK 2008), whereas Tertiary sediments in the Upper Rhine Graben are known to have densities between 2350 and 2450 kg/m³, strongly depending on the amount of evaporites in the Early Oligocene strata, at least in the southern part of the graben (ROTSTEIN et al. 2006). Interpreting the observed gravity anomalies only in terms of varying sediment thickness, two main centres of deposition can be distinguished: one between Heidelberg and the Rhine River, and a second northeast of Landau (Fig. 4). Considering the information about the Base 'Quaternary' (Fig. 2), the anomaly close to Landau is not related to anomalously thick Quaternary deposits. The source of this anomaly must be assumed to lie at greater depth. An earlier gravity interpretation of the Heidelberg area goes back to CLOSS (1943). Based on torsion balance data, models were derived that explain the observed horizontal gradient (in Eötvös, 1 E = 10⁻⁹ s⁻² = 0.1 mGal/km) running from the Odenwald in the east about 5 km into the Upper Rhine Graben in the west (Fig. 6). Therefore, only the local situation at the eastern graben boundary fault was investigated. The thickness of the Pleistocene and youngest Pliocene sediments is estimated to be ~500 m at the Heidelberg UniNord location. Furthermore, in CLOSS's model the uppermost sediment unit dips at about 45° towards the west, strongly contradicting the results of the new reflection seismic surveys. Density contrasts in the crystalline basement of the URG in this region were not taken into account, although a large variety of alkaline (high-density) and acidic (lower-density) rocks is known from the adjacent Odenwald.

Seismic survey and data processing

Seismic measurements in urban areas often encounter considerable difficulties. Restrictions exist for seismic sources (only low-energy seismic sources, service lines, endangerment of supply lines, service pipes, traffic restrictions, etc.) as well as for the recording side (sealing of the surface, enhanced noise level, etc.). The design of seismic profiles is therefore more often dictated by logistics than by geological reasoning.

At the start of the project, a borehole location in Heidelberg was proposed at the abandoned freight depot south of the Neckar River (Fig. 7). A 1.5 km long profile (profile 1) was therefore measured, which showed strong inclinations of deep reflectors. To check the true inclination of the reflectors, the data were supplemented by a second profile later on (profile 4). The profiles could not be connected to existing industry profiles (the nearest one being about 1.5 km away to the SW), but the interpretation could be done by comparison of reflection patterns. The borehole Radium-Sol Therme could barely be … After some encouraging results, a longer north-south trending profile (profile 3) was recorded. However, continuation to the exploration borehole 'Schriesheim', about 1 km further north (Fig. 2), could not be established.

Fig. 8: Seismic hydraulic vibrator developed for high-resolution near-surface measurements.
After fixing the location of the research borehole 'Heidelberg UniNord 1', another short profile (profile 5) was shot to check for possible fault zones. In this area signal quality was degraded by waves originating from service pipes (called 'pipe waves' for simplicity in the following). These are especially visible on profile 2 and profile 5. The two profiles at the Viernheim location (profiles 6 and 7 in Fig. 2) were laid out perpendicular to each other, with the proposed position of the research drilling at their crossing point. These profiles were surveyed in a forested area without further problems. All profiles were shot with a small hydraulic vibrator (Fig. 8), which yields a maximum peak force of 30 kN and a frequency range of 20-500 Hz (for recording parameters cp. Table 2). This vibrator was developed especially for high-resolution shallow profiling (BUNESS & WIEDERHOLD 1999, VAN DER VEEN et al. 2000). It is very appropriate for use in an urban environment due to its relatively low impact on the surface, compared to e.g. weight dropping or other impulsive sources. Data were processed using a commercial processing system (ProMAX, Landmark Corp.). The processing steps are listed in Table 3; details regarding the individual steps are discussed e.g. in YILMAZ (2001). (Table 3 ends with FX deconvolution (60-180 Hz) and depth conversion as the final steps.) In addition to the processing steps listed in Table 3, pipe waves in profiles 2 and 5 were suppressed by trace-mixing algorithms. Trace mixing, although a very simple algorithm, proved to be the most effective of a variety of algorithms tested, including spatial 2-D filters, f-k based filters and eigenvector filtering. However, the incoherency of this kind of noise prevents its elimination without damaging the reflection signal. The velocity fields for finite-difference migration and for the subsequent depth conversion were derived from the smoothed stacking velocity fields. The deviations from the velocities derived by the VSP (vertical seismic profile) measurements carried out in the UniNord 1 borehole down to a depth of 180 m turned out to be very small; hence the former were kept during later processing. The structure of the uppermost low-velocity layer down to a depth of 20 m could not be determined adequately, causing some uncertainty about the overall depth level of the profiles. As a consequence, the depth level was calibrated using the VSP reflections, which have a known depth. A reference level of 100 m a.s.l. was chosen for all profiles.

Heidelberg - the region north of the Neckar River

The profile which is best tied to an existing deep borehole is the north-south trending profile 3 in the area north of the Neckar River: the Schriesheim borehole is located about 1 km further north (Fig. 2). Although no direct connection between the newer seismic line and the borehole exists, the geological structures can be controlled by hydrocarbon seismic lines. Both E-W and N-S trending profiles that cross the Schriesheim borehole are available.
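As an aside on the noise-suppression step mentioned in the processing description above, trace mixing is essentially a weighted running average over neighbouring traces. A minimal sketch, assuming the data are held as a 2-D NumPy array of samples by traces (which is of course not how ProMAX implements it), could look like this:

import numpy as np

def trace_mix(section, half_width=2, weights=None):
    # section: array of shape (n_samples, n_traces); each trace is averaged
    # with `half_width` neighbours on either side to suppress trace-to-trace
    # incoherent noise such as pipe waves.
    n_samples, n_traces = section.shape
    if weights is None:
        weights = np.ones(2 * half_width + 1)
    weights = np.asarray(weights, dtype=float)
    mixed = np.empty_like(section, dtype=float)
    for j in range(n_traces):
        lo = max(0, j - half_width)
        hi = min(n_traces, j + half_width + 1)
        w = weights[half_width - (j - lo): half_width + (hi - j)]
        mixed[:, j] = section[:, lo:hi] @ (w / w.sum())   # renormalise at the edges
    return mixed

The averaging inevitably smears laterally incoherent energy and steeply dipping events alike, which is why, as noted above, the pipe-wave noise could not be removed completely without damaging the reflection signal.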
An estimation of the velocity field of profile 3 (Fig. 10) reveals interval velocities that increase gradually from 1600 m/s near the surface to 3200 m/s at a depth of 1500 m. These velocities are deduced from stacking velocities, which do not constitute physical seismic velocities, since they are affected by layer dips, side reflections, diffractions and other seismic particularities. The reliability of stacking velocities decreases strongly with depth, depending on the maximum offset of the seismic survey. The velocities cannot be controlled by other methods, e.g. VSP measurements, due to the lack of deep VSP (vertical seismic profile) or sonic measurements. However, they coincide quite well with the regional velocity trend derived from deep boreholes in the Heidelberg Basin. A remarkable feature is a zone of low velocity, which lies at a depth of approximately 180-250 m at the northern edge of the profile and of 300-450 m at its southern edge. The upper limit of this zone therefore corresponds well with the lower boundary of the coarse-grained Weinheim beds (cf. Fig. 3).

Profile 2 is located between the Neckar River and the eastern boundary fault of the Upper Rhine Graben (see Fig. 7). At its eastern edge the profile approaches the topographic border of the URG and hence the master boundary fault of the URG. Both the unmigrated time section (Fig. 11) and the migrated depth section (Fig. 12) are presented, so the reader can better judge the influence of the migration processing step. Generally, the reflections dip to the east, with dips becoming greater with increasing depth. A wedge-shaped zone at the easternmost position (marked by the dotted line in Fig. 11) shows no coherent signal. This wedge is probably due to the eastern master fault, which juxtaposes the sedimentary infill of the URG against the crystalline basement of the Odenwald. A very rough estimation of its dip yields values of approximately 80°. Adjacent to this wedge, flat or slightly westward-dipping reflectors are displayed at travel times greater than 700 ms. This feature is interpreted as a drag fold. Below 700 ms, reflections again dip towards the east. Some hints of diffractions can be seen that could be caused by the fault zone (e.g. CMP 260 at 800 ms). Areas of apparently low reflectivity observed in the youngest Quaternary, e.g. between CMP 110 and CMP 310, are caused by strong noise due to pipe waves. After migration (Fig. 12), the bending of the sediment strata next to the assumed boundary fault becomes more obvious. However, it is difficult to separate real reflections from migration artefacts, which always occur at the ends of a seismic profile. Signals inside the marked triangle should not be considered real reflectors. The continuous subsidence of sediments beneath the central and western part of the profile is revealed by an increasing dip angle with depth. The apparent dip of Base 'Quaternary' is 1.5° towards the east, and 4.5° for Base Pliocene. A fault with a distinct displacement is again only visible in the upper part of the Hydrobia beds.

Profile 5 was recorded with modified acquisition parameters (smaller CMP spacing, smaller offset, cf. Table 2), which yielded a higher resolution (Fig. 13).
Again, in the eastern part the image of the Quaternary deposits is significantly affected by pipe waves. At depths between 600 and 700 m, reduced reflectivity is observed at the western end of the profile. The dip angles of Base 'Quaternary' and Base Pliocene, 2° and 6° respectively, are slightly increased with respect to those of profile 2.

In the 'Heidelberg UniNord 1' borehole, a vertical seismic profile (VSP) was recorded in 2006 (ELLWANGER et al. 2008) (Fig. 14), as well as numerous other geophysical logging methods (HUNZE & WONIK 2008). To ensure comparability with the reflection seismic profiles, the same vibrator and the same field parameters were used. The VSP corridor stack represents the locally reflected wavefield and corresponds with the surface reflection profiling results. Small discrepancies, as seen in Fig. 14, can be explained by different field geometries and recorded frequencies. The corridor stack was used to calibrate the total static of the reflection seismic lines, since the depths of its reflections are determined directly.

Heidelberg - the region south of the Neckar River

The new reflection seismic profiles 1 and 4 (Figs. 15, 16) are located south of the Neckar River (see Fig. 7). Therefore a tie to existing deep wells is hardly possible. The observed reflection patterns cannot easily be correlated with those of the hydrocarbon seismic lines, as had been assumed prior to the surveys. Therefore, the information from the Schriesheim well was correlated along several seismic lines and transferred to the nearest seismic profile recorded by the hydrocarbon industry, which lies about 1.5 km south of profile 4. But the interpretation of the youngest sediments, e.g. the Quaternary, remains especially uncertain, because these were not imaged well in the industrial seismic lines. Profiles 1 and 4 intersect each other at an angle of about 50°. Again, on both profiles the apparent dip of the sediment fill increases with depth. For Base 'Quaternary' it amounts to 3°; for Base Pliocene it amounts to 9° (profile 1) and 11° (profile 4). Therefore, the real dip is about 4° for Base 'Quaternary' and 11° for Base Pliocene, in each case towards the east. This is about twice the dip of these beds north of the Neckar River. Similar to profile 3, the Quaternary deposits show quasi-continuous reflections (e.g. profile 1 at 300 m depth), alternating with weak and partly sub-parallel reflections. Within the Hydrobia beds a strong reflector again occurs that is also visible on the other seismic sections.

Viernheim

In contrast to the seismic profiles recorded in the Heidelberg area, the seismic lines recorded in the vicinity of the Viernheim borehole (Figs. 17, 18) allow a distinct classification of the sediment fill. Tops of zones of high reflectivity are visible at depths of 220 m and 570 m, whereas especially the depth interval between 450 m and 570 m shows low reflectivity. Considering the results of the research borehole Viernheim (HOSELMANN 2008), the reflector at a depth of 228 m can be assigned to the transition between the 'Quaternary' and the underlying material of local provenance (the reference height of the seismic line is 100 m above sea level; the drilling site is at 97 m above sea level). The limnic-fluviatile deposits of local provenance cause the observed high reflectivity. Due to the high quality of the Viernheim data, a complex fault pattern can be identified.
Especially on the E-W oriented profile 6 (Fig. 17), a fault zone is observed that runs through all prominent reflection horizons and penetrates deep into the Rhenish facies, e.g. into 'Pleistocene' sediments. Originally a drilling location close to the intersection of profiles 6 and 7 was favoured. Based on these seismic results it was shifted by about 500 m towards the south, where the sediment succession was expected to be less affected by faults. Due to the acquisition parameters and the processing of the seismic data, reflectors above a certain minimum depth could not be imaged. A precise and detailed stratigraphic interpretation of the reflectors below 570 m depth was not possible, because these were not reached by the research borehole. Instead, we intend to achieve this by using industrial reflection seismic data, which is much denser than in the Heidelberg area.

Discussion and Conclusion

For the Heidelberg Basin, thick sediment sequences are apparent in both the gravity and the seismic data. Quantitative interpretations remain preliminary as long as no deep boreholes can be included and no 3-D models based on gravimetric and seismic data can be calculated that additionally consider data from the hydrocarbon industry. A consistent seismic stratigraphy for the Plio-/Pleistocene sediments of the URG is therefore not achievable within the limits of the presently applied, inconsistent stratigraphic scenarios.

Concerning the location of the Viernheim boreholes, the two new reflection seismic profiles image a sedimentary environment that is affected by tectonics, as documented by several faults. But these faults are mainly restricted to older sediments, e.g. to depths greater than about 225 m (Pliocene and older according to HOSELMANN 2008). In contrast, tectonic activity during the Quaternary appears to have been quiet: along the entire profile these sediments are horizontal. But on a smaller scale, much more detailed information is imaged by the two seismic lines, indicating a complex depositional system, especially during Pliocene or even older times. Considering the results of the Viernheim research borehole, the top and the base of a fine-grained horizon comprising the Ladenburg Horizon (Oberer Zwischenhorizont), at depths of about 40 and 80 m respectively, can be traced in the seismic section. Reflections at a depth of about 180 m can be correlated with a sequence of regionally distributed fine-grained sediments (Unterer Zwischenhorizont). The top of the uppermost section of high reflectivity can be correlated with the transition from the Rhenish facies (alpine) sediments to material of local provenance at a depth of 225 m, as revealed by heavy mineral analysis, carbonate content, and petrography (HOSELMANN 2008). Whether this seismic reflector can be interpreted as the Plio-/Pleistocene transition must be investigated by additional palynological studies that are not yet available.

The most important result derived from the seismic pre-site survey at the Heidelberg UniNord location is the existence of a subbasin in the Heidelberg Basin close to the eastern margin of the Upper Rhine Graben. The additional subsidence adds up to some hundred meters with respect to deeper strata, as imaged on profile 3, e.g. about 400 m for the top of the Hydrobia beds.
All recorded profiles show no disconformities; faults are restricted to early Miocene (Hydrobia beds or older) units. However, this observation does not exclude potential hiatuses, which cannot be imaged by reflection seismic data. By tying the new reflection lines to the Schriesheim well, the Pliocene to Pleistocene transition was predicted at depths between 400 and 500 m. This statement was based on the assumption that the extrapolation of information from the industry seismic lines across the gap between the northern end of seismic profile 3 and the location of the Schriesheim borehole is correct. Furthermore, using the interpretation of Base 'Quaternary' from a hydrocarbon well is uncertain, because no information is available on the kind of data it is based on. Work done in the framework of hydrocarbon exploration was focused on deep structures rather than on young sediments.

The seismic lines reveal more or less continuous subsidence. The dip of the sediments towards the east increases with depth, showing a classical rollover anticline structure at the eastern URG master fault. The amount of subsidence increases from north to south: Base Pliocene dips east by about 4.5° on profile 2, 6° on profile 5 and 11° on profiles 1 and 4. These values make a major fault zone beneath the Neckar River plausible, indications of which can be found at the southern end of profile 3. Differential compaction may have played a role by increasing the dip of the Miocene and younger strata. The rollover structure is found neither to the north nor to the south of the 'Heidelberger Loch', as inferred from industry seismic profiles, which affirms it as the area with the most rapid subsidence. With regard to a deeper understanding of the basin genesis, a crucial point in the chronostratigraphic interpretation of seismic reflectors will be the proper definition of 'Base Quaternary'. A major step is also expected once the new drillings are interpreted and correlated using the tools of sequence stratigraphy (ELLWANGER et al. 2008). We are convinced that this will also include a consistent seismic scenario.

Fig. 5: 2-D gravity profile across the Rhine Graben in the Heidelberg area. Although the regional gravity field correlates well with sediment thickness, additional density contrasts in the basement are required to explain the observed anomalies in the western part.

Fig. 8: Seismic hydraulic vibrator developed for high-resolution near-surface measurements.

Fig. 11: Stacked seismic reflection profile 2 in the time domain, unmigrated. Areas with very low reflectivity in the upper 300 ms in the central part of the profile are due to strong noise and do not reflect geology. A wedge-shaped zone at the eastern edge of the profile, indicated by a dotted line, may be caused by the transition from the sedimentary infill of the URG to the crystalline basement of the Odenwald.

Fig. 13: Migrated seismic reflection profile 5. No seismic information is available inside the transparent triangle for geometric reasons.

Fig. 16: Migrated seismic reflection profile 4. No seismic information is available inside the transparent triangle for geometric reasons. The strongly west-dipping reflectors close to the triangle zone are probably artefacts caused by migration.
Table 2: Recording parameters used in the reflection seismic surveys.

Table 3: Typical data processing flow.

Due to the thick sequence of Plio-/Pleistocene sediments, the research boreholes Heidelberg UniNord provide a uniquely high temporal resolution. From the first Heidelberg borehole in 2006, a time marker of the Waalian stage is reported at 183-210 m depth (HAHNE, ELLWANGER & STRITZKE 2008). As the age of the Waalian is considered to amount to 1.5 Ma, and assuming continuous sedimentation, the top of the Pliocene at 2.6 Ma would be expected at only about 365 m depth. However, the preliminary interpretation of the new reflection seismic profiles places Base 'Quaternary' only at about 430 m depth. Consequently, increased sedimentation rates have to be expected for the early Quaternary.
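The sedimentation-rate argument made in the preceding paragraph can be spelled out in a short back-of-the-envelope calculation; the figures below simply reuse the numbers quoted above (taking the deeper bound of the Waalian marker), so the exact values should be read as illustrative rather than as results of this study:

\[
\dot{s}_{\mathrm{post\text{-}Waalian}} \approx \frac{210\ \mathrm{m}}{1.5\ \mathrm{Ma}} = 140\ \mathrm{m/Ma},
\qquad
z(2.6\ \mathrm{Ma}) \approx 140\ \mathrm{m/Ma} \times 2.6\ \mathrm{Ma} \approx 365\ \mathrm{m}.
\]

With Base 'Quaternary' instead interpreted at about 430 m depth, the interval between 2.6 Ma and the Waalian marker requires

\[
\dot{s}_{\mathrm{early\ Quaternary}} \approx \frac{(430 - 210)\ \mathrm{m}}{(2.6 - 1.5)\ \mathrm{Ma}} \approx 200\ \mathrm{m/Ma},
\]

i.e., roughly 1.5 times the post-Waalian rate, which is the increase in early Quaternary sedimentation rates referred to above.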
8,785
2009-04-01T00:00:00.000
[ "Geology", "Environmental Science" ]
Organizational topology of brain and its relationship to ADHD in adolescents with d‐transposition of the great arteries Abstract Objective Little is currently known about the impact of congenital heart disease (CHD) on the organization of large‐scale brain networks in relation to neurobehavioral outcome. We investigated whether CHD might impact ADHD symptoms via changes in brain structural network topology in a cohort of adolescents with d‐transposition of the great arteries (d‐TGA) repaired with the arterial switch operation in early infancy and referent subjects. We also explored whether these effects might be modified by apolipoprotein E (APOE) genotype, as the APOE ε2 allele has been associated with worse neurodevelopmental outcomes after repair of d‐TGA in infancy. Methods We applied graph analysis techniques to diffusion tensor imaging (DTI) data obtained from 47 d‐TGA adolescents and 29 healthy referents to construct measures of structural topology at the global and regional levels. We developed statistical mediation models revealing the respective contributions of d‐TGA, APOE genotype, and structural network topology on ADHD outcome as measured by the Connors ADHD/DSM‐IV Scales (CADS). Results Changes in overall network connectivity, integration, and segregation mediated worse ADHD outcomes in d‐TGA patients compared to healthy referents; these changes were predominantly in the left and right intrahemispheric regional subnetworks. Exploratory analysis revealed that network topology also mediated detrimental effects of the APOE ε4 allele but improved neurobehavioral outcomes for the APOE ε2 allele. Conclusion Our results suggest that disruption of organization of large‐scale networks may contribute to neurobehavioral dysfunction in adolescents with CHD and that this effect may interact with APOE genotype. Introduction Congenital heart disease (CHD) is very common and affects 8 per 1000 live births (van der Linde et al. 2011). Approximately one-third of affected children require intervention in early infancy. Improved survival has revealed that neurodevelopmental disability is the most common long-term complication of CHD. Neurocognitive deficits have been documented in executive function, attention, visual-spatial processing, and memory (Bellinger et al. 2011;Rollins et al. 2014). Structural brain abnormalities beginning in utero and extending into the postoperative period have also been shown, including reduced brain volume and increased risk of white matter injury (Limperopoulos et al. 2010;Donofrio et al. 2011;Lynch et al. 2014). A recent study (Panigrahy et al. 2015) has shown that neurocognitive deficits in adolescents with dextro-transposition of the great arteries (d-TGA) repaired in early infancy are driven by differences in global white matter structural network topology, as determined via diffusion tensor imaging (DTI), graph analysis, and statistical mediation models. DTI enables investigation of white matter microstructure as well as connectivity between gray matter regions. Graph analysis involves a systems-level approach to modeling the brain (the "connectome") and the subsequent investigation of brain network topology. Differences in network topology have been seen in a variety of neurobehavioral disorders including schizophrenia and autism (Li et al. 2014;Griffa et al. 2015). Mediation models are useful for investigation of what brain differences underlie neurocognitive or neurobehavioral differences (Hayes 2009). 
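As a concrete illustration of the kind of mediation model referred to here, the following minimal sketch estimates an indirect effect as the product of the X-to-M and M-to-Y regression coefficients and derives a percentile bootstrap confidence interval, in the spirit of Hayes (2009). It is not the authors' code: the variable names and simulated data are purely hypothetical, and the covariates and FDR correction used in the actual models described below are omitted.

# Minimal product-of-coefficients mediation with a percentile bootstrap CI.
# Hypothetical variables: x = group indicator, m = a network metric, y = an ADHD score.
import numpy as np

rng = np.random.default_rng(0)
n = 76                                           # e.g. 47 patients + 29 referents
x = rng.integers(0, 2, n).astype(float)          # referent (0) vs patient (1)
m = 0.5 * x + rng.normal(0.0, 1.0, n)            # mediator influenced by x
y = 0.7 * m + 0.2 * x + rng.normal(0.0, 1.0, n)  # outcome influenced by m and x

def indirect_effect(x, m, y):
    # a-path: regress m on x (with intercept); a is the coefficient of x
    Xa = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]
    # b-path: regress y on m and x (with intercept); b is the coefficient of m
    Xb = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][1]
    return a * b

boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)                  # resample subjects with replacement
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

est = indirect_effect(x, m, y)
lo95, hi95 = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {est:.3f}, 95% bootstrap CI = [{lo95:.3f}, {hi95:.3f}]")

An indirect effect is judged significant when the bootstrap interval excludes zero; the study additionally applies FDR correction across the twelve CADS scores.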
Topological structural differences may constitute powerful mediators of neurocognitive outcome and may represent potent biomarkers for not only neurocognitive outcome but also for therapy to optimize it. Since children with CHD have greater likelihood of ADHD (Shillingford et al. 2008), we hypothesized that differences in brain structural topology may underlie this diagnosis as well. The risk for ADHD children with complex CHD may, in fact, be 3-4 times higher as in the general population (Shillingford et al. 2008), making it important to understand the neurophysiological etiology. As an exploratory analysis, we wished to investigate whether different alleles of apolipoprotein E (APOE) interact with d-TGA and modulate ADHD; this interaction may also be mediated via changes in brain network topology. ApoE-containing lipoproteins are the primary lipid transport vehicles in the CNS and have an important role in mobilization and redistribution of cholesterol and phospholipids during remodeling of neuronal membranes (Laskowitz et al. 1998). We previously identified and validated an association of APOE genotype with early neurodevelopmental outcomes after cardiac surgery in neonates and infants with CHD (Gaynor et al. 2003(Gaynor et al. , 2009(Gaynor et al. , 2014. The APOE e2 allele (described in Materials and Methods) was associated with worse performance at 12-14 months of age after surgery and with an increased risk of behavior problems at 4 years of age, while the APOE e4 allele was associated with better outcomes. Thus, we used graph analysis techniques on DTI data acquired from adolescents with d-TGA corrected in early infancy and prospectively enrolled in the Boston Circulatory Arrest Study (BCAS). Our metrics, which reflect network connectivity, segregation, integration, and the balance between integration and segregation, were included in statistical mediation models with d-TGA status or APOE genotype as the independent variable and ADHD outcome as the dependent variable. Materials and Methods Details of the study population, APOE genotyping, the MRI acquisition details, the graph analysis technique, and statistical mediation models are available from previously published work (Bellinger et al. 1995(Bellinger et al. , 2011Tardiff et al. 1997;Gaynor et al. 2003;Rivkin et al. 2013;Rollins et al. 2014;Panigrahy et al. 2015) and are only summarized here. Participants Study participants with d-TGA were taken from the BCAS; healthy referent adolescents met standard criteria from the NIH MRI study of normal brain development. This study was approved by the Boston Children's Hospital Institutional Review Board and adhered to both institutional guidelines and the Declaration of Helsinki. Parents provided written informed consent, and adolescents provided assent. APOE Genotyping APOE genotype was determined according to singlenucleotide polymorphisms that specify a cysteine and/or an arginine residue at codons 112 and 158 of the APOE gene. The e2 allele consists of a cysteine at positions 112 and 158; e3 consists of a cysteine at position 112 and an arginine at position 158; e4 consists of an arginine at both positions. DTI acquisitions and preprocessing DTI data were obtained from a GE Twin 1.5T system (General Electric, Milwaukee, WI) at Boston Children's Hospital with b = 750 s/mm 2 and six diffusion-encoding directions. Standard preprocessing was used with tools in FSL (FMRID, RRID: SCR_002823, http://www.fmrib.ox.ac.uk/fsl/). 
Deterministic tractography was performed using in-house software written in IDL (http://www.ittvis.com, Boulder, CO). DTI data were segmented into 76 anatomical regions by applying the automated anatomic labeling (AAL) template (Tzourio-Mazoyer et al. 2002) transformed into native space (Fig. 1). Adjacency matrices (unweighted graphs) were computed, with each off-diagonal element containing 1 if at least one streamline connected the two regions, or 0 if not. Graph analysis Graph metrics (global efficiency, modularity, transitivity, and small-worldness) (Rubinov and Sporns 2010) were computed via the C++ modules available from the Brain Connectivity Toolbox (BCT; RRID:SCR_004841, http:// www.brain-connectivity-toolbox.net). A brief summary of each graph metric is provided here; we refer the reader to (Rubinov and Sporns 2010) for a more detailed description. Cost The ratio of connections in the graph to the total possible number of connections. Global efficiency For each pair of nodes, the efficiency is the reciprocal of the shortest path length between those nodes (if the nodes are disconnected, the efficiency is zero). Global efficiency is the efficiency averaged over all pairs of nodes. Modularity The graph is segregated into subnetworks and the modularity measures the connections within subnetworks as compared to the connections between subnetworks. The Louvain algorithm (Blondel et al. 2008) is used to optimize the subnetwork assignments. Transitivity Transitivity is the ratio of the actual number of "triangles" in the graph to the possible number of "triangles" (e.g., if node A is connected to node B, and node A is connected to node C, is node B connected to node C). Small-worldness A "small-world" network has similar global efficiency, but much greater transitivity, when compared to a random graph with the same degree distribution. The smallworldness parameter is defined as the ratio of global efficiency in the graph to the global efficiency of a random graph, multiplied by the ratio of transitivity in the graph to the transitivity of a random graph. Neuropsychological testing Neuropsychological test scores of adolescents with d-TGA and healthy referent adolescents were reported previously in detail (Bellinger et al. 2011). A previous publication (Panigrahy et al. 2015) focused on relations between d-TGA, structural network topology, and neurocognitive outcomes (e.g., WISC-IQ, WIAT scores, memory tests, etc.) Here we focus on inattentiveness and hyperactivity. We used the Connors ADHD/DSM-IV Scales (CADS) questionnaire administered to parents, teachers, and subjects. This questionnaire yields three scores linked to DSM-IV criteria for ADHD: Hyperactive/Impulsive, Inattentive, and Combined. In addition, it yields an ADHD Index score based on the 12 items of the questionnaire that best distinguish children who have a diagnosis of ADHD from children who do not. Thus, 12 scores were obtained for each subject (4 scores obtained from parent, teacher, and subject each) in which higher score indicates worse performance. Mediation analysis Statistical mediation models incorporated d-TGA status or APOE genotype as the independent variable, graph metrics as the mediating variable, and (each of the 12) CADS scores as the dependent variable. When APOE genotype was the independent variable, comparisons were performed between d-TGA subjects homozygous for the e3 allele and those heterozygous for either the e2 or e4 allele and the e3 allele. 
Additionally, the total effect (which is a simple linear regression model with d-TGA status or APOE allele the independent variable and CADS score the dependent variable) was computed for d-TGA and APOE allele. For all analyses, age, sex, and square root of acquired DTI frames were included as covariates. Bootstrapping (25,000 iterations) was used to test the mediation results for statistical significance. Results were deemed significant at a False Discovery Rate (FDR) corrected q < 0.05 (corrected for multiple comparisons across the 12 scores obtained from the CADS). In addition to investigating global structural network topology, we performed regional (subnetwork) analyses. The graphs were averaged over all participants and, using the Louvain modularity algorithm (Blondel et al. 2008), regional modules (subnetworks) were found consisting of frontal interhemispheric, posterior interhemispheric, left intrahemispheric, and right intrahemispheric regions (Fig. 2). Using BCT, nodal metrics (degree, nodal efficiency, clustering coefficient, participation coefficient) were computed and averaged over all nodes in each subnetwork. Degree was further separated into intramodular degree (i.e., connections to nodes within the same subnetwork) and intermodular degree (i.e., connections to nodes in different subnetworks). (We again refer the reader to (Rubinov and Sporns 2010) for a detailed description of these metrics.) Clinical trial participants Data from 47 adolescents with d-TGA and 29 referent subjects were included in the final analysis (Rivkin et al. 2013). The scans of an additional 33 adolescents with d-TGA and 11 referent subjects were excluded from analysis due to unacceptable signal artifact. Demographic characteristics (gestational age, sex, and age at MRI) in the excluded subjects did not differ from those of subjects included in the final analysis. Adolescents with d-TGA, compared to referent subjects, were older at MRI, more likely to be male, and lower in social class, but had similar gestational age. Of the d-TGA patients, 31 were homozygotes for the e3 allele, five were e3-e4 heterozygotes, nine were e2-e3 heterozygotes, and two were e2-e4 heterozygotes (these two were not included in the APOE genotype analysis). Analysis of total effect Subjects with d-TGA, compared to referent subjects, demonstrated worse scores (FDR corrected q < 0.05) on two CADS tests (Parent report: combined, inattentive); significance was not reached for the other 10 tests. Significance was not reached for d-TGA subjects with the APOE e2 allele or the e4 allele on any test. Analysis of indirect effect In the following sections, we present significant results from the mediation analyses (analyses of indirect effect). We remind the reader that an indirect effect (e.g., better/ worse CADS scores in d-TGA adolescents mediated by differences in a network topology metric) does not necessarily imply a significant total effect. In addition to the specific metric, we list the specific CADS tests for which significance was reached (Adolescent, parent, or teacher report: and then the specific subtest(s), whether combined, hyperactive/impulsive, total, or inattentive). Significant indirect effects (FDR q < 0.05) were also seen in the frontal interhemispheric network (Fig. 5). Worse scores were modulated by intramodular degree (Parent report: hyperactive/impulsive, total). 
Interestingly, better (lower) scores were mediated by participation coefficient (Adolescent report: hyperactive/impulsive, total; Teacher report: all subtests). Participation coefficient (in addition to clustering coefficient) also reflects network segregation.

Fig. 4: Results from the mediation analyses demonstrate different paths by which regional topology differences (in the left and right intrahemispheric networks) mediate CADS scores for d-TGA adolescents in general, and for d-TGA adolescents with the e2 or e4 ApoE alleles. Arrows from the topology metrics to the CADS scores indicate a significant indirect effect (FDR corrected for multiple comparisons with q < 0.05; red = positive, blue = negative). Arrows from the independent variables to the topology metrics indicate differences in topology metrics (red = positive, blue = negative, gray = difference not significant).

Neurobehavioral mediation analysis: effect of APOE e4 genotype

Our exploratory aim was to determine whether the differences in network topology mediated differences in ADHD functioning in d-TGA adolescents heterozygous for either the e2 or the e4 allele as compared to those homozygous for the e3 allele. At the global level (Fig. 3), worse scores in d-TGA adolescents with the e4 allele were mediated by differences in: cost (Teacher report: combined, inattentive, total), global efficiency (Teacher report: combined, total), modularity (Teacher report: all tests), and small-worldness (Teacher report: inattentive). (Small-worldness is a metric representing the balance between integration and segregation.)

Fig. 5: Results from the mediation analyses demonstrate different paths by which regional topology differences (in the frontal and posterior interhemispheric networks) mediate CADS scores for d-TGA adolescents in general, and for d-TGA adolescents with the e2 or e4 ApoE alleles. Arrows from the topology metrics to the CADS scores indicate a significant indirect effect (FDR corrected for multiple comparisons with q < 0.05; red = positive, blue = negative). Arrows from the independent variables to the topology metrics indicate differences in topology metrics (red = positive, blue = negative, gray = difference not significant).

At the regional level, worse scores were mediated (FDR q < 0.05) by topology differences in all subnetworks (Figs. 4, 5). Right intrahemispheric: worse scores were mediated by nodal efficiency (Teacher report: total). Left intrahemispheric: worse scores were mediated by nodal efficiency (Teacher report: combined, hyperactive/impulsive, total) and degrees (Adolescent report: combined, inattentive; Parent report: combined, inattentive, total; Teacher report: combined, total). Posterior interhemispheric: worse scores were mediated by nodal efficiency (Teacher report: all subtests), degrees (Parent report: all subtests; Teacher report: combined, hyperactive/impulsive, total), and intramodular degree (Parent report: hyperactive/impulsive). Frontal interhemispheric: worse scores were mediated by nodal efficiency (Teacher report: total). Interestingly, in the posterior interhemispheric subnetwork, better scores were also mediated by intramodular degree (Adolescent report: all tests).

Discussion

Previous work has demonstrated that diminished cognitive function (including overall intelligence, memory, and executive function) in adolescents with d-TGA relates to white matter microstructure (Rollins et al. 2014) and is mediated by global differences in white matter structural network topology (Panigrahy et al. 2015).
These findings suggest that alteration of large-scale network organization can contribute to cognitive dysfunction in children with surgically treated CHD. We now report that scores on CADS ADHD tests are also mediated by global and regional white matter topologic differences. Additionally, our exploratory analysis found that APOE genotype exerted an effect on structural topology and CADS scores in these adolescents. We found worse scores on two CADS subtests (Parent report: combined, inattentive) in the subset of d-TGA adolescents for which DTI data were acquired, in agreement with DeMaso et al. (2014) and Bellinger et al. (2011), who earlier showed worse scores on parent-report CADS index scores in the entire BCAS cohort. Our results are also in agreement with Shillingford et al. (2008), who demonstrated a higher prevalence of ADHD in school-aged (5-10 years old) children with CHD. Interestingly, as was the case in our study, this higher prevalence was more pronounced using the parent-report scores as compared with the teacher-report scores. The reason for this is unknown but may be related to teachers having a higher threshold for classifying a behavior as abnormal. Taken together, these results indicate that vulnerability for ADHD is present in the CHD cohort throughout childhood and adolescence. However, statistical significance was not reached for the analyses regarding APOE genotype. While this may be due to insufficient power, an effect of APOE genotype on ADHD outcome in general has not been found (see Gatt et al. (2015) for meta-analyses), although these studies were conducted on healthy referents and may not be transferable to CHD populations. A previous study (Gaynor et al. 2009) also did not find a significant effect regarding ADHD specifically. However, that study was conducted on children of preschool age, who were much younger than those of this study. We also found that the worse ADHD outcomes in d-TGA adolescents are mediated by global structural topology differences (cost, global efficiency, and transitivity). The topology differences involving network segregation (transitivity) differed, however, from those (modularity and small-worldness) found to mediate poorer neurocognitive outcomes (Panigrahy et al. 2015). This suggests that different structural features may underlie cognitive as compared to behavioral deficits such as inattentiveness and hyperactivity in d-TGA subjects. Our preliminary evidence here also suggests an influence of APOE genotype on structural topology and ADHD outcome. Better ADHD outcome in d-TGA adolescents with the e2 allele was mediated by differences in global network topology (cost, global efficiency, and transitivity). By contrast, in d-TGA adolescents with the e4 allele, worse ADHD outcome was mediated by a different set of global metrics (cost, global efficiency, modularity, and small-worldness). Interestingly, these results in adolescents are opposite to what was found (Gaynor et al. 2009) in CHD patients of preschool age (i.e., the e2 allele being deleterious while the e4 allele is neuroprotective). Our results indicate that network integration, as reflected by global efficiency, mediates a variety of neurobehavioral outcomes, including the worse ADHD outcomes seen in this study as well as the worse neurocognitive function seen in our previous report (Panigrahy et al. 2015).
With regard to network segregation, however, transitivity was found to mediate worse ADHD outcome, in contrast to small-worldness and modularity mediating worse neurocognitive outcome. Modularity is a measure representing network segregation at the regional (i.e., subnetwork) level; a more modular network has a greater proportion of intra-subnetwork connections. Transitivity, on the other hand, represents network segregation at the nodal level and represents the proportion of "triangles" (e.g., if node A is connected to node B and node B is connected to node C, node A is also connected to node C). Small-worldness is related to the ratio of transitivity to global efficiency and is often interpreted as the "balance" between network integration and segregation (Watts and Strogatz 1998). Thus, the increased modularity underlying poorer neurocognitive performance may indicate a deficit in long-range connectivity between subnetworks, as efficient communication between subnetworks is necessary for complex cognitive tasks (Li et al. 2009). By contrast, the altered transitivity associated with worse ADHD may be more associated with short-range and local connectivity. This hypothesis is supported by DTI studies (Davenport et al. 2010;van Ewijk et al. 2014) showing uniquely higher FA in ADHD adolescents (compared to adolescents with schizophrenia) in the left and right anterior corona radiata, and higher FA in widespread white matter regions associated with short-range connectivity. Our results from our exploratory analysis also suggest an important role of the ApoE genotype in brain development and recovery after injury. However, the genotype effect identified in this study (the e2 allele associated with better outcomes and the e4 allele with worse outcomes) is opposite to that seen in previous studies involving younger age ranges (Gaynor et al. 2003(Gaynor et al. , 2009). Thus, our results suggest antagonistic pleiotropy, which occurs when genes control for some beneficial traits and some deleterious traits. An example is, when a gene is associated with a beneficial effect early in development, thereby enhancing selection and survival, but has a more deleterious effect later in life, resulting in the development of disease (Han and Bondi 2008;Rusted et al. 2013). In the normal population, the e4 allele is neuroprotective early in development, and has been associated with a decreased risk of spontaneous abortion (Zetterberg et al. 2002) and improved neurodevelopmental outcomes in infants with malnutrition (Oria et al. 2007(Oria et al. , 2010 or lead exposure (Wright et al. 2003). Thus, improved early neurodevelopmental outcomes after cardiac surgery may be the result of greater neuroprotection against the effects of hypoxicischemic injury occurring during the third trimester or perinatally. However, the e4 allele may interact in a deleterious manner with other risk factors later in development, in a similar manner as seen in normal adults regarding increased risk for AD and worse outcome after TBI (Shu et al. 2014;Tsao et al. 2014). The exact timing of this switchover remains unclear as do its environmental and genetic determinants. Similarly, the e2 allele, while interacting deleteriously with injury caused by CHD pre-or perinatally, may provide an advantage later in development. 
Interestingly, the same metrics (cost, global efficiency, and transitivity) which mediate better ADHD outcomes for d-TGA adolescents with the e2 allele mediate worse ADHD outcomes in d-TGA adolescents in general; we speculate that the e2 allele counteracts the general effect of d-TGA on the developing brain. Further research will be necessary to investigate this hypothesis. In this study, we went beyond global metrics to also look at regional topology differences. The changes in topology mediating worse CADS scores in d-TGA adolescents were seen to originate predominantly from the left and right intrahemispheric networks (primarily the left), and in the nodal metrics (degree, nodal efficiency, clustering coefficient) corresponding to those previously seen at the global level (cost, global efficiency, transitivity). In the frontal interhemispheric network, however, while intramodular degree mediated worse CADS scores, participation coefficient mediated better scores. Intramodular and intermodular degrees were significantly lower in d-TGA patients, while participation coefficient was reduced at a trend level. Based on these results, we hypothesize that d-TGA results in impaired frontal-frontal (including interhemispheric) connectivity, which adversely affects CADS scores. However, a partial compensatory mechanism is available via reduced connectivity to other regions of the brain. Interestingly, the previously cited study (van Ewijk et al. 2014) showed decreased FA in frontal regions in children with a diagnosis of ADHD, but a positive correlation of FA with ADHD symptoms in those children. The decreased overall FA may correspond to the impaired frontal-frontal connectivity found here, while the positive correlation of FA with ADHD symptoms may correspond to the reduced connectivity to the other regions, resulting in fewer crossing fibers. Regarding the e2 and e4 alleles, changes in regional topology mediating better ADHD outcomes were seen throughout the brain, indicating that the effects of these alleles may not be regionally specific. Interestingly, topology metrics in the posterior interhemispheric networks for the e4 allele mediated worse scores on the parent- and teacher-report scales but better scores on the self-report scales. The correlation between parent, teacher, and adolescent CADS scores is quite weak overall (Kaner 2011), though with better correlations between parent- and teacher-report scores, and the reliability of ADHD adolescent self-reports for negative behaviors is questionable (Smith et al. 2000). Thus, we find it premature to speculate on the possible significance of these findings. Our results regarding the effect of the e2 or e4 allele should be taken as preliminary due to the small number of participants heterozygous for each allele, and the consequent failure to control for clinical and perioperative variables. As a result of the small sample size, statistical significance for the indirect effect was sometimes reached in the absence of statistical significance for the total effect or either of the two individual pathways. Nevertheless, the preliminary evidence for antagonistic pleiotropy in these alleles is quite intriguing and awaits replication in a study with a larger sample size. Additional limitations include the use of a DTI sequence with six diffusion-encoding gradient directions, the minimum necessary to compute the diffusion tensor.
While greater SNR is available when more directions are used (Jones 2004), six directions provide comparable robustness for computation of parameters from deterministic tractography (Lebel et al. 2012). This method has been used to show differences in graph metrics (Shu et al. 2011) in patients with multiple sclerosis. However, there may exist even more relevant differences in brain network topology than were detected in our study. Future research may incorporate diffusion spectrum imaging (DSI) or Q-ball imaging, which use a much larger number of directions. Conclusion White matter structural network topology (including integration and segregation) is shown to mediate worse ADHD outcomes in adolescents with d-TGA. The segregation metric (transitivity) is distinct from metrics of segregation (modularity, small-worldness) previously found to mediate worse neurocognitive performance. Opposite to what was observed in CHD patients at younger ages, in this cohort, our exploratory analysis revealed that better outcome was mediated by structural topology in adolescents with the APOE e2 allele while worse outcome was mediated in adolescents with the APOE e4 allele. These results suggest an early switchover for the effects of antagonistic pleiotropy in individuals with complex CHD.
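To make the graph measures used throughout this study concrete, the sketch below computes cost, global efficiency, transitivity, modularity, and a small-worldness ratio of the kind defined in the Graph analysis section from a binary adjacency matrix. It is an illustrative re-implementation with networkx and numpy rather than the BCT code actually used; the random adjacency matrix, the edge probability, and the five rewiring iterations are arbitrary assumptions, and louvain_communities requires a recent networkx release.

# Illustrative computation of binary-graph topology metrics (not the authors' BCT pipeline).
import numpy as np
import networkx as nx
from networkx.algorithms import community as nxcomm

rng = np.random.default_rng(1)
n_regions = 76                                   # e.g. the 76 AAL regions used in this study
A = (rng.random((n_regions, n_regions)) < 0.15).astype(int)
A = np.triu(A, 1)
A = A + A.T                                      # symmetric, zero diagonal: "connected or not"

G = nx.from_numpy_array(A)

cost = nx.density(G)                             # fraction of possible connections present
e_glob = nx.global_efficiency(G)                 # mean inverse shortest path length
trans = nx.transitivity(G)                       # ratio of closed triangles
parts = nxcomm.louvain_communities(G, seed=0)    # Louvain subnetwork assignment
q = nxcomm.modularity(G, parts)                  # within- vs between-module connections

# Small-worldness as defined above: ratios against a degree-preserving random reference.
G_rand = nx.random_reference(G, niter=5, seed=0)
small_world = (e_glob / nx.global_efficiency(G_rand)) * (trans / nx.transitivity(G_rand))

print(f"cost={cost:.3f}  Eglob={e_glob:.3f}  T={trans:.3f}  Q={q:.3f}  SW={small_world:.2f}")

In practice each subject's adjacency matrix would be built from the tractography streamline counts (1 if at least one streamline connects two regions, 0 otherwise) before these metrics are fed into the mediation models.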
5,901
2016-06-09T00:00:00.000
[ "Psychology", "Biology" ]
Indexing Exoplanets with Physical Conditions Potentially Suitable for Rock-Dependent Extremophiles The search for different life forms elsewhere in the universe is a fascinating area of research in astrophysics and astrobiology. Currently, according to the NASA Exoplanet Archive database, 3876 exoplanets have been discovered. The Earth Similarity Index (ESI) is defined as the geometric mean of radius, density, escape velocity, and surface temperature and ranges from 0 (dissimilar to Earth) to 1 (similar to Earth). The ESI was created to index exoplanets on the basis of their similarity to Earth. In this paper, we examined rocky exoplanets whose physical conditions are potentially suitable for the survival of rock-dependent extremophiles, such as the cyanobacteria Chroococcidiopsis and the lichen Acarospora. The Rock Similarity Index (RSI) is first introduced and then applied to 1659 rocky exoplanets. The RSI represents a measure for Earth-like planets on which physical conditions are potentially suitable for rocky extremophiles that can survive in Earth-like extreme habitats (i.e., hot deserts and cold, frozen lands). Introduction In recent years, extraterrestrial research has become the 'holy grail' of astrobiology. Space missions like CoRoT (Convection, Rotation and planetary Transits) and Kepler have provided a huge amount of data from exoplanetary observations which are catalogued in the Planetary Habitability Laboratory, (PHL-EC, University of Puerto Rico (UPR), Arecibo, 2017, http://phl.upr.edu/projects/habitable-exoplanets-catalogue/data/database) [1]. The PHL-EC data (as of 2018) for different planetary objects, such as radius, density, escape velocity, and surface temperature, have been used to create a metric index called the Earth Similarity Index (ESI) that ranges from 0 (dissimilar to Earth) to 1 (identical to Earth) [2]. The ESI allows Earth-like and potentially habitable planets (PHPs) to be identified on the basis of the observed physical parameters of extra-solar objects. Exoplanets can be divided into rocky planets of different sizes and gas giants. The masses of rocky planets range from 0.1 to 10 Earth masses, while the radii range from 0.5 to 2 Earth radii [3]. Recently, Kashyap et al. [4] introduced a new technique to estimate the surface temperature of different exoplanets and formulated the Mars Similarity Index (MSI) for the search for extremophilic life forms which are capable of survival in Mars-like conditions. In 2018, Kashyap et al. [5] introduced two additional indexes: the Active Tardigrade Index (ATI) and the Cryptobiotic Tardigrade Index (CTI). Both the ATI and CTI were designed to catalogue exoplanets according to the potential survivability of extremophilic invertebrates (e.g., Tardigrada (water bears)) on their surfaces. The ATI and CTI are defined as the geometric mean of radius, density, escape velocity, surface temperature, surface pressure, and revolution, in a range from 0 to 1. This paper focuses on rocky exoplanets with Earth-like conditions and surface temperatures varying within a range potentially suitable for growth and reproduction of extremophilic microorganisms. Extremophiles are organisms which are able to survive extreme physical or geochemical conditions that are lethal, or at least harmful, to most organisms on Earth [6]. These organisms can be found in all kingdoms of life, but most of them belong to Bacteria and Archaea. In addition, such organisms can also be found among animals, fungi, and plants. 
The organisms considered to be the most tolerant include fungi, lichens, algae, tardigrades, rotifers, nematodes, and some insects and crustaceans [7][8][9][10][11][12][13][14]. This paper focuses on two extremophiles growing on rocks: the cyanobacteria Chroococcidiopsis and the lichen Acarospora [15][16]. Chroococcidiopsis is a primitive photosynthetic cyanobacterium growing on and below rocks and characterized by a high potential for colonization and recolonization of extreme habitats [17]. Chroococcidiopsis is known for its tolerance of harsh conditions, including high and low temperatures, ionising radiation, and high salinity [18]. Verseux et al. [19] proposed that Chroococcidiopsis is an organism capable of living on Mars and potentially capable of terraforming the red planet. Additionally, Chroococcidiopsis was used in tests involving low Earth orbit, impact events, planetary ejection, atmospheric re-entry, and simulated Martian conditions [20][21][22][23]. Acarospora species are crustose lichens inhabiting xerothermic habitats that grow on dry rocks [24] and tolerate harsh conditions such as low and high temperatures, high radiation, or lack of water [15,25]. Research has shown that two Acarospora species are capable of survival in a simulated Martian environment [26]. This paper introduces the Rock Similarity Index (RSI) and calculates the RSI for 1659 rocky-iron exoplanets. The RSI is similar to the ATI and CTI (as calculated in [5]), yet differs in that the surface temperature parameter is modified to reflect the potential survivability of rock-dependent extremophiles.

Results

The RSI is designed to index Earth-like planets with physical conditions which, though harsh, are at least potentially suitable for rock-dependent extremophiles such as Chroococcidiopsis and Acarospora. According to McKay [27], generally speaking, the temperature range in which extremophilic microorganisms are able to reproduce and grow is between 258 K and 395 K. With regard to the calculation of the RSI, the corresponding weight exponent for surface temperature was calculated to be 2.26. We calculated the RSI average weight exponents for rocky exoplanets, as shown in Table 1. The weight exponents for the upper and lower limits appeared similar to those of the tardigrade indexes of Kashyap et al. [5], with the exception of surface temperature. In order to calculate the surface temperature of the studied exoplanets, an albedo of 0.3 (similar to that of Earth) was applied as a proxy (e.g., as seen in Table 2, for Proxima Cen b the effective temperature is 229.3 K and the surface temperature 263.9 K). In order to calculate the weight exponents, the following ranges were used for the upper and lower limits of each parameter: mean radius = 0.5-1.9 EU; bulk density = 0.7-1.5 EU; escape velocity = 0.4-1.4 EU; surface temperature T = 258-395 K; and revolution = 0.61-1.88 EU. The weight exponents were calculated by applying these limits in the weight exponent equation previously proposed [5]. The RSI for rock-dependent extremophiles is defined as the geometric mean of the radius, density, escape velocity, and surface temperature indexes of an exoplanet, in a range from 0 to 1, where 0 indicates non-survival and 1 represents survival. Mathematically,

RSI = (RSI_R x RSI_rho x RSI_Ts x RSI_Ve x RSI_rev x RSI_p)^(1/6),

where RSI_R, RSI_rho, RSI_Ts, RSI_Ve, RSI_rev, and RSI_p represent the RSI values of radius, density, surface temperature, escape velocity, revolution (Earth years), and pressure, respectively.
The RSI of each physical parameter is defined similarly to the ESI and is given by

RSI_x = (1 - |(x - x0)/(x + x0)|)^(w_x),

where x represents a physical parameter of the exoplanet (radius R, bulk density rho, escape velocity Ve, surface temperature Ts, pressure p, or revolution rev), x0 denotes the reference value for Earth, and w_x is the weight exponent, as seen in Table 1. Most parameters are expressed in EU (Earth units), while the surface temperature is given in Kelvin (K). The global RSI is divided into an interior index (RSI_I) and a surface index (RSI_S), each expressed as the geometric mean of the corresponding parameter indexes, and the global RSI is then defined as the geometric mean of the two,

RSI = (RSI_I x RSI_S)^(1/2).

The RSI values are computed from these relations using data from [4] for the radius, density, escape velocity, surface temperature, revolution, and pressure, together with the surface temperature weight exponent value of 2.26. A representative sample is shown in Table 2; the entire table is catalogued and made available online (see [28]). A graphical representation of rocky planets characterized according to the RSI is presented in Figure 1. The threshold (a limit for potential microorganism survival) for rocky exoplanets that are considered potentially habitable by extremophiles such as Chroococcidiopsis and Acarospora is defined by considering Mars, on which these forms of life are able to survive [20,26] and which has an RSI of ~0.82.

Discussion and Conclusions

The search for extraterrestrial life forms has given rise to numerous space missions that have enabled researchers to collect data, to test different species of extremophiles (e.g., black fungi, cyanobacteria, bryophytes, invertebrates) in space conditions, to analyse their physiology under extreme conditions [29], and finally to find potentially habitable exoplanets for Earth-like organisms. Space missions which have previously studied extremophiles include EXPOSE-E, EXPOSE-R2, BIOMEX, and CoRoT [30]. Up to now, Earth is the only known rocky planet which both has a developed biosphere and is shielded by a magnetic field that protects it against harmful cosmic radiation [31]. In this analysis, we focused on rocky exoplanets which have physical conditions similar to those of Earth or Mars. We chose two microorganisms, Chroococcidiopsis and Acarospora, that are able to survive, grow, and reproduce under very harsh conditions and in the absence of a planetary magnetic field. Chroococcidiopsis was previously selected for colonizing tests on Mars (Russian Expose Mission) because it can grow on rocks, produces oxygen, and tolerates high-energy cosmic radiation [32]. Similarly, Acarospora was tested by the EXPOSE-E mission for one and a half years and managed to survive in Mars-like conditions [24]. According to Kashyap et al. [4], Mars, with an ESI value of 0.73, was defined as the limit for planets which could have physical conditions suitable for complex life forms. Based on this criterion, approximately 44 planets have been identified as PHPs. Considering the RSI for 1659 rocky exoplanets with a threshold of 0.82, 21 exoplanets have been found to be PHPs on which the physical conditions are suitable for extremophiles such as Chroococcidiopsis and Acarospora. A very important factor in our analysis is the calculation of the weight exponent for surface temperature. The weight exponents used for each physical factor allow an accurate calculation of the ESI and RSI, so it is crucial to have the correct weight exponent.
For the calculation of the RSI, a temperature limit range from 273K to 373K was used [27], and the corresponding weight exponent for surface temperature was calculated to be 2.26. This value corresponds to the conditions which are potentially suitable for rock-dwelling extremophiles to survive. Subsequent space missions, such as the James Webb Space Telescope, will provide deeper insights into potentially habitable planets and their environments. Once the data from these missions have been combined with detailed knowledge on environmental conditions where extremophiles are potentially able to survive, it will be possible to identify potential physical and chemical parameters which should be present on exoplanets or exomoons to be suitable for Earth-like organisms. The RSI proposed by us is a tool which indexes planets that have physical conditions potentially suitable for certain Earth microorganisms. While it is obvious that our index does not provide definitive answers, it does enable us to identify the best candidate exoplanets or exomoons to be chosen for both further research and searches for extraterrestrial life signatures.
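A minimal numerical sketch of an RSI-style calculation is given below. It follows the ESI-type per-parameter form described above; for brevity only four of the six parameters are included, the Earth reference temperature, the example input values, and all weight exponents other than the surface-temperature value of 2.26 quoted in the text are placeholder assumptions (the actual exponents are those of Table 1, not reproduced here), and the interior/surface grouping shown follows the usual ESI convention rather than being taken from the paper.

# Illustrative RSI-style calculation (placeholder weights; only w for T_surf = 2.26 is from the text).
import numpy as np

earth = {"radius": 1.0, "density": 1.0, "v_esc": 1.0, "T_surf": 288.0}       # EU and K (assumed T0)
weights = {"radius": 0.57, "density": 1.07, "v_esc": 0.70, "T_surf": 2.26}   # all but T_surf assumed

def rsi_param(x, x0, w):
    # ESI-type parameter index in [0, 1]
    return (1.0 - abs((x - x0) / (x + x0))) ** w

def rsi(planet):
    # Interior/surface split assumed to follow the ESI convention.
    interior = [rsi_param(planet[k], earth[k], weights[k]) for k in ("radius", "density")]
    surface = [rsi_param(planet[k], earth[k], weights[k]) for k in ("v_esc", "T_surf")]
    rsi_i = np.prod(interior) ** (1.0 / len(interior))
    rsi_s = np.prod(surface) ** (1.0 / len(surface))
    return float(np.sqrt(rsi_i * rsi_s))

# Example with made-up, roughly Mars-like parameter values (EU, K):
mars_like = {"radius": 0.53, "density": 0.71, "v_esc": 0.45, "T_surf": 240.0}
print(f"RSI = {rsi(mars_like):.2f}")

With the full set of six parameters and the weight exponents of Table 1, the same structure yields the catalogue values discussed above.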
2,429
2020-01-26T00:00:00.000
[ "Geology", "Physics" ]
Influence of rapid laser heating on differently matured soot with double-pulse laser-induced incandescence

Abstract

For accurate laser-induced incandescence (LII) measurements of soot properties it is of great importance to understand the nature of the physical processes involved during rapid laser heating. In this work, we investigate how well-characterized, differently matured fresh soot from a soot generator responds to rapid laser heating. For this purpose, a double-pulse LII setup is used with 10 μs time separation between the pulses, using various combinations of two common LII wavelengths (532 and 1064 nm). Detection is performed at two wavelength bands for fluorescence analysis, and additionally elastic light scattering is used for mass loss analysis during heating. We investigate how the LII signal changes with pre-heating laser energy, specifically by fluence curve analysis, to estimate the influence of thermal annealing, sublimation and laser-induced fluorescence interference. It is shown that extensive absorption enhancement occurs for all types of soot as the soot is thermally annealed, which is manifested through a decreasing dispersion coefficient ξ and an increasing absorption coefficient E(m,λ). When comparing young and mature soot, a much larger impact of sublimation can be observed in the fluence curves of the mature soot. Also, we observe an enhanced contribution of laser-induced fluorescence for the young soot when performing LII measurements using 532 nm, which is suggested to originate from vaporized carbon fragments with an aromatic structure. This work further shows the potential of utilizing double-pulse arrangements for increasing the detectability of poorly absorbing soot, but it also highlights the impact of laser heating on soot, which may be important to avoid interferences when performing soot diagnostics.

Introduction

Throughout the formation and maturation process of soot in a combustion environment, its optical and physicochemical properties change drastically (López-Yglesias, Schrader, and Michelsen 2014; Apicella et al. 2015; Johansson et al. 2017; Michelsen 2017). Hence, when soot is rapidly cooled and emitted from the combustion process, its properties will depend on the residence time, temperature, and chemical environment in which it was formed. The optical and physicochemical properties of the fresh soot may range from small, poorly absorbing soot particles with a high wavelength dependence (Simonsson et al. 2015; Török et al. 2018; Michelsen et al. 2020) and a disordered nanostructure with a low C/H ratio of about 1.4-2.5 (Michelsen et al. 2020), to large dendritically shaped aggregates with highly ordered graphitic structures, a C/H ratio of 8-20 (see (Michelsen et al. 2020) and references therein) and near-black-body optical properties. The understanding of this wide spread of properties from different combustion processes and its relation to the maturity of soot is crucial, as various types of soot may impact climate (Bond et al. 2013; IPCC 2013) and human health (Janssen et al. 2012) differently. Also, difficulties may arise as diagnostic techniques for soot detection could respond differently to soot of different maturity. Laser-induced incandescence (LII) is a common optical technique for soot diagnostics which can provide quantitative and/or qualitative measurements of different soot properties (Michelsen et al. 2015).
The physical principle of the technique is based on heating of soot with high-energy laser radiation until it reaches temperatures of 3000-4000 K, and the subsequent detection of the thermal radiation of the soot in the visible and near-infrared spectrum. This enhanced thermal radiation is the LII signal. By investigating how the laser-soot interaction behaves as a function of laser energy, wavelength and time, one may obtain information about soot volume fractions, primary particle sizes, and the absorption wavelength dependence. But as soot particles are heated to LII temperatures using a short laser pulse, the soot properties will to some extent be altered. The main processes related to the altering of the soot and of the soot aerosol are (1) sublimation and vaporization, as carbon fragments and other species may be emitted from the soot surface (Olofsson et al. 2015), (2) thermal annealing, which may induce changes in the soot nanostructure and enhance the structural order with larger graphitic layers with fewer defects (Vander Wal, Ticich, and Stephens 1998; Vander Wal and Choi 1999; Apicella et al. 2019), and (3) new particle formation, as the vaporized fragments can nucleate and form small particles with amorphous character (Michelsen et al. 2007; Migliorini et al. 2020). In order to perform optimized LII measurements on soot of different maturity, it is crucial to understand how these processes depend on the maturity of the soot. Extensive work was done by Vander Wal and coworkers (Vander Wal, Ticich, and Stephens 1998; Vander Wal and Choi 1999) in the late 1990s to understand the influence of rapid laser heating on soot. By performing double-pulse LII experiments followed by transmission electron microscopy (TEM) on sampled soot, they observed the impact of rapid laser heating on soot, as TEM micrographs revealed extensive graphitization of the soot as a result of thermal annealing. The enhanced graphitic properties of the soot also induced an enhancement of the LII signal detected after laser heating. The changes were observed at energies above 0.1 J/cm² at 1064 nm, but it was also noted that significant mass loss occurred at energies above 0.45 J/cm². In Cenker and Roberts (2017), similar double-pulse experiments were performed on less mature soot in a Santoro burner flame, and observations showed that pre-heating with fluences higher than 0.1 J/cm² resulted in more rapid LII decay rates, indicating significant sublimation, while fluences above 0.15 J/cm² showed an enhanced LII signal due to the influence of thermal annealing. Possible differences in the optical and physicochemical properties of the investigated soot may explain the differences in sublimation and annealing thresholds. Recently, Apicella et al. (2019) investigated how rapid laser heating alters soot generated using different fuels (ethylene and methane) and specifically soot of different maturity. It was shown that the nanostructure of the soot was extensively altered, but differently for young and mature soot, as the young soot showed an onion-like structure with voids, while the mature soot showed a collapsed multifaceted rosette structure. Further investigations were performed by Migliorini et al. (2020), who used broad-band extinction measurements of laser-heated soot in combination with Raman spectroscopy and scanning mobility particle sizing (SMPS). Soot from either ethylene or methane combustion was investigated, and dispersion coefficients of ξ = 1.02 and 2.22 were estimated, respectively.
The structural changes due to laser heating of the two types of soot showed differences: the ethylene soot specifically showed indications of decreasing maturity from extinction measurements (increasing dispersion coefficient) and from Raman spectroscopy (decreasing I(D)/I(G)). The size distribution measurements showed a smaller size mode appearing with increasing laser heating. Similar work was done by Michelsen et al. (2007), who observed newly formed particles which appeared to have less structural order. Hence, it might be suggested that a possible explanation for the increasing dispersion coefficient in Migliorini et al. (2020) is newly formed particles, which may have optical properties similar to young soot due to both their bulk properties (Michelsen et al. 2007) and their small size (Wan, Shi, and Wang 2020). The objective of this work is to further explore how rapid laser heating influences characteristic properties of well-characterized soot particles of different maturity by investigating the changes in the laser-soot interaction after laser pre-heating of the soot. We specifically employ a 2-pulse (2P) LII setup, as done in Cenker and Roberts (2017) and Vander Wal, Ticich, and Stephens (1998), to obtain information about the pre-heating influence on the soot by comparing the LII signal from the soot with (2P) and without (1P) pre-heating. The three types of soot in this work, denoted OP1, OP6 and OP7, have previously been extensively investigated (Török et al. 2018; Le et al. 2019; Malmborg et al. 2019; Török, Mannazhi, and Bengtsson 2021) using optical as well as other aerosol measurement techniques to characterize their optical and physicochemical properties. Previous work has shown that OP1 soot is of mature black carbon (BC) character, with a dispersion coefficient of approximately 1.2 (λ = 405-1064 nm (Török et al. 2018) and λ = 532 and 1064 nm (Török, Mannazhi, and Bengtsson 2021)), with E(m,1064 nm) = 0.33 (Török, Mannazhi, and Bengtsson 2021), and with an organic-to-total carbon ratio (OC/TC) of 9% (Török et al. 2018). Both OP6 and OP7 soot are of young, partly matured, brown carbon (BrC)-like character, as they absorb less efficiently and have been estimated to have dispersion coefficients of 2.5 and 3.5 (λ = 405-1064 nm (Török et al. 2018)) and OC/TC values of 59% and 87%, respectively (Török et al. 2018). Further, it was shown that heating the soot in a thermodenuder and an oven prior to diagnostics resulted in a lower dispersion coefficient for both OP6 and OP7 soot (2.3 and 2.5), though still significantly higher than that of OP1, suggesting that the refractory soot itself is of less mature character (Török et al. 2018). An important part of this work is the use of LII fluence curve analysis, which was performed for the three types of mini-CAST soot of different maturity as the soot was pre-heated with a laser pulse of varying energy prior to the LII measurement. Special focus was directed toward the OP1 and OP6 soot, for which the temporal LII signal is long enough to perform a full analysis. Observations showed that the pre-heating influences the absorption properties of the soot, as both less energy is needed to reach a certain LII signal and higher peak LII signals are reached for a certain pulse energy. Our results are in good agreement with previous work, suggesting that rapid laser heating will alter the soot by thermal annealing and mass loss by vaporization/sublimation. The effects of these processes were found to vary largely with the soot maturity.
Additionally, for the less mature soot, an enhanced laser-induced fluorescence (LIF) signal contribution was observed during LII measurements using excitation at 532 nm, and we hypothesize that this is related to the vaporized species, which may fluoresce when excited with 532 nm radiation.
Experimental setup
An overview of the LII setup is shown in Figure 1. The soot source for producing the soot aerosol is a mini-CAST soot generator model 5201 C (Jing 2009). The mini-CAST operates on a propane-air co-flow diffusion flame. By adjusting the fuel/air ratio and by mixing nitrogen into the fuel flow, the conditions of soot formation in the flame change, and different types of soot can be emitted when quenched by a steady nitrogen flow at a fixed height. The three types of polydisperse soot examined in this work have previously been investigated, showing large differences in maturity (Török et al. 2018; Le et al. 2019; Malmborg et al. 2019). The OP1 soot is mature, while OP6 and OP7 soot are of decreasing maturity and will be referred to as young soot. The soot concentration in the probe volume spanned about two orders of magnitude between the mature and young soot (~200 ppb and a few ppb, respectively). The pulsed laser system consisted of two Nd:YAG lasers (Quantel Brilliant B) operated at 1064 and 532 nm, respectively, which are common wavelengths used for LII measurements. A digital pulse generator was used to organize the timing of the two pulses, of which the first performs the pre-heating and the second is used for the LII measurement. The delay between the pulses was set to 10 μs to enable the pre-heated soot to reach ambient temperature before LII measurements using the second pulse. It should be noted that similar delays were used by both Vander Wal and Choi (1999) and Cenker and Roberts (2017); nevertheless, this choice of delay time was validated experimentally. For the pre-heating laser, the 4 mm center of the beam was cut out, reshaped and relay imaged into the probe volume at a size of 5 mm in diameter. The measurement laser beam was passed through an attenuation system of two thin-film polarizers at the Brewster angle with a half-wave plate in between. Similar to the pre-heating beam, the center 4 mm was cut out, reshaped and relay imaged into the probe volume at a size of approximately 2 mm in diameter, resulting in a probe volume of approximately 70 mm³. The laser beams, which both had top-hat spatial profiles, were overlapped at a small angle, and as the LII laser beam was positioned clearly inside the pre-heating beam, homogeneous heating of the soot in the probe volume was assured. For signal detection, three photomultiplier tubes (PMTs) were used within their range of linear response (Mansmann, Dreier, and Schulz 2017). Two were used for LII detection at 575 nm and 684 nm, both of type Hamamatsu H10721-20. The third PMT measured the elastic scattering from the soot particles during LII detection when the laser wavelength of 532 nm was used. This PMT was also from Hamamatsu but of type H6780-04. A 4-channel digital oscilloscope (LeCroy Waverunner 104MXi) with a sampling rate of 1 GHz was used for signal collection. A beam profiler camera (Wincam D beam profiler) was used for beam profile monitoring during alignment, and a Labsphere IES 1000 was positioned in front of the detection system for pyrometry calibration.
Methodology
The energy balance of a soot particle during heating by a laser pulse can be described by Equation (1).
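In its commonly used form, this energy balance can be written as follows (the exact sign conventions and grouping of terms may differ between formulations):

\frac{dU_{\mathrm{int}}}{dt} = \dot{q}_{\mathrm{absorption}} - \dot{q}_{\mathrm{conduction}} - \dot{q}_{\mathrm{sublimation}} - \dot{q}_{\mathrm{radiation}} + \dot{q}_{\mathrm{other}} \qquad (1)

where q̇_other collects minor contributions such as thermionic emission and annealing, whose sign depends on the process.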
The temperature evolution of the soot is coupled to the change of the internal energy U_int, which depends on the heat exchange rates (denoted q̇) of the different physical processes involved during and after the laser pulse. The absorption rate (q̇_absorption) is dictated by the input energy and the complex refractive index m of the soot material. Sublimation, expressed through the rate q̇_sublimation, does not come into play until temperatures of around 3400 K, above which it becomes increasingly important with increasing soot temperature (Liu et al. 2006; Olofsson et al. 2015). Conduction (expressed through q̇_conduction) is the dominating heat-loss mechanism in most situations and is governed by the temperature difference between the soot and the surroundings. The radiative emission rate (q̇_radiation) depends on the complex refractive index of the soot but contributes much less to the overall energy balance at ambient conditions (Michelsen et al. 2015). Finally, the rate q̇_other covers other processes such as thermionic emission and annealing, which are often considered to influence the change in internal energy only marginally. Annealing is, however, known to induce changes in the soot material and may influence its optical and physicochemical properties (Vander Wal, Ticich, and Stephens 1998; Vander Wal and Choi 1999; Michelsen et al. 2015; Apicella et al. 2019; Migliorini et al. 2020). As described in Sipkens and Daun (2017), the peak LII signal as a function of input energy can be an effective tool for investigating the laser-soot interaction when performing LII measurements. In Figure 2a, an example of time-resolved experimental LII signals from laser-heated soot is shown as the black lines. The peak LII intensity can be observed as a function of the input energy, namely the fluence curve, which is shown as the red curve connecting the peaks. In the low-fluence regime, where q̇_absorption is the dominant process, the temperature increase approximately follows a linear trend (De Iuliis et al. 2006; Sipkens and Daun 2017), which results in a rapid LII signal increase due to its strong temperature dependence, LII ∝ T⁵. As soot reaches temperatures above the sublimation threshold, which approximately coincides with the inflexion point of the fluence curve, sublimation will successively become the dominating loss mechanism and hinder the linear temperature increase. As can be seen in Figure 2a, the curve will level off as the increase in absorbed energy primarily goes into the sublimation process. Figure 2. In (a), an example of experimental time-resolved LII signals as a function of laser fluence is shown. The red curve, which connects the peak LII signals as a function of laser fluence, is termed the fluence curve. In (b), a simulation of the influence of primary particle size and absorption properties on the fluence curve is illustrated. A top-hat laser profile, with an LII laser wavelength of 1064 nm and detection at 575 nm, was used for the modeling case. Values of E(m) and d_pp are specified in the legend. As soot is heated to high temperatures with a short laser pulse, the properties of the soot particles may, as previously discussed, change. As shown in Apicella et al. (2019), Cenker and Roberts (2017), Migliorini et al. (2020), Vander Wal and Choi (1999), and Vander Wal, Ticich, and Stephens (1998), restructuring of the soot material due to thermal annealing will influence the optical properties of the material, along with mass loss by sublimation.
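To make the qualitative shape of a fluence curve concrete, the following minimal Python sketch illustrates the two effects discussed next for Figure 2b. It is deliberately not the full heat- and mass-transfer LII model used in this work: it assumes a Rayleigh absorber heated instantaneously by absorption alone, crudely caps the temperature at a sublimation limit of 4000 K, neglects conduction and any change of emissivity at the detection wavelength, and uses assumed round-number material constants.

import numpy as np

# Assumed constants for a qualitative sketch only (not the paper's LII model parameters)
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant
rho, cp = 1860.0, 1900.0                   # assumed soot density [kg/m^3] and heat capacity [J/(kg K)]
lam_heat, lam_det = 1064e-9, 575e-9        # heating and detection wavelengths [m]
T0, T_cap = 300.0, 4000.0                  # ambient temperature and crude sublimation cap [K]

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam and temperature T."""
    return 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

def peak_signal(F, Em, dpp):
    """Crude peak LII signal: Rayleigh absorption (C_abs = pi^2 dpp^3 E(m)/lam),
    instantaneous heating, temperature capped at T_cap to mimic sublimation,
    emission taken as particle volume times the Planck function at lam_det."""
    dT = 6.0 * np.pi * Em * F / (rho * cp * lam_heat)   # temperature rise, independent of dpp
    T = np.minimum(T0 + dT, T_cap)
    return dpp**3 * planck(lam_det, T)

fluence = np.linspace(0.02, 0.40, 8) * 1e4              # J/m^2 (i.e. 0.02-0.40 J/cm^2)
base      = peak_signal(fluence, Em=0.33, dpp=22.0e-9)  # unheated soot
annealed  = peak_signal(fluence, Em=0.46, dpp=22.0e-9)  # higher E(m): curve shifts to lower fluence
mass_loss = peak_signal(fluence, Em=0.33, dpp=20.8e-9)  # smaller dpp: lower signal at all fluences

for F, s0, s1, s2 in zip(fluence, base, annealed, mass_loss):
    print(f"{F/1e4:4.2f} J/cm2   base {s0:10.3e}   E(m)=0.46 {s1:10.3e}   dpp=20.8 nm {s2:10.3e}")

Run as written, this sketch reproduces the two trends described for Figure 2b: raising E(m) from 0.33 to 0.46 shifts the curve toward lower fluences, while shrinking d_pp from 22 nm to 20.8 nm scales the signal down by (20.8/22)³ ≈ 0.85 at every fluence.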
In Figure 2b, numerical simulations using the LII model are used to illustrate differences in fluence-curve shape that may occur as a result of rapid laser heating. The increase in absorption efficiency is described by a change of the absorption function E(m) from 0.33 to 0.46, while the mass loss is described by a decrease of the primary particle size d_pp from 22 nm to 20.8 nm. As can be seen, a change of E(m) results in a shift of the fluence curve toward lower fluences, while the decrease in d_pp results in a lower LII signal.
Results and discussion
The double-pulse LII measurements were done in three sets with the wavelength combinations λ_pre-heat–λ_LII = 532-532, 1064-532, and 532-1064 nm, where λ_pre-heat is the wavelength of the first laser pulse, responsible for the pre-heating, and λ_LII is the wavelength of the second laser pulse, inducing the incandescence. The measurement cases with pre-heating will be denoted 2P (2-pulse) and the cases without pre-heating will be denoted 1P (1-pulse). Figure 3. Fluence curves for OP1, OP6, and OP7 soot using detection at 575 nm. In (a)-(c), the pre-heating is done using 532 nm laser radiation and LII measurements are done using 1064 nm laser radiation (532-1064), ensuring interference-free LII. For (d)-(f) the pre-heat and LII combination is 532-532, and for (g)-(i) it is 1064-532. For these measurements, LIF interference is expected for OP6 and OP7 soot, which consist of a significant amount of organic carbon. Some fluence curve changes induced by the pre-heating are highlighted with arrows and are further discussed in the text. Throughout the results and discussion, focus will be directed toward the comparison of mature OP1 soot and young OP6 soot, as OP7 soot gave signals with rather low signal-to-noise ratios in some measurement series.
Influence of pre-heating on fluence curves
All the fluence curves for the three wavelength combinations, as well as for the OP1, OP6, and OP7 soot, with and without pre-heating, are presented in Figure 3, where the curves are normalized to the signal from non-heated soot with the detection wavelength of 575 nm. As indicated in the figure, the pre-heating appears to influence the fluence curves overall by I) inducing a change in the peak LII intensity, II) changing the amount of energy needed to heat the soot to peak LII, and III) for some pre-heated cases with LII using 532 nm excitation, producing extensive signal enhancement at low fluences. This signal enhancement is a laser-induced fluorescence interference, which will be further discussed in Section 3.5. Figures 3a-c show the influence of λ_pre-heat = 532 nm on the LII signal induced using λ_LII = 1064 nm. By increasing the pre-heat pulse fluence, it can be seen that at ~0.06 J/cm², the first shift of the fluence curve toward lower fluences can be observed for OP1 soot, still with the same LII peak intensity. This shift indicates increased absorption efficiency (as discussed in Section 3 in relation to Figure 2b) and is assumed to be related to thermal annealing of the soot. Figure 4. The influence of pre-heating is shown on the peak LII signal and on the relative absorption efficiency in relation to the pre-heating energy. In (a), the relative peak intensity is shown for all studied soot when performing LII measurements using 1064 nm. The trend observed for the delayed LII signal (at 50 ns after peak LII) is also shown as gray lines for comparison. In (b), the relative curve position is shown for the same cases as in (a).
In (c) and (d), the relative curve position for OP1 and OP6, respectively, can be observed for all experimental cases. Linear fitting is done for the soot pre-heated using 532 nm to investigate the pre-heating influence on the dispersion coefficient. The dispersion coefficient as a function of the pre-heating energy is shown as the blue line, following the right blue y-axis. All pre-heating energies using 1064 nm are multiplied by the factor (1/2)^ξ for the unheated mini-CAST soot. This annealing becomes more efficient with increased pre-heat pulse energy, thereby shifting the fluence curve more with increasing pre-heat laser fluence. Another effect, which can be observed in Figure 3a for pre-heat laser fluences of ~0.097 J/cm² and above, is a decrease of the peak LII signal. The lower peak LII signal can be related to mass loss of the particle through sublimation/vaporization as a result of the pre-heating. Hence, for this mature OP1 soot, the absorption efficiency due to annealing increases at much lower temperatures than the sublimation threshold. These observations of a decreasing LII peak with increasing pre-heating have also been made by Cenker and Roberts (2017) and Vander Wal, Ticich, and Stephens (1998). The analysis becomes less straightforward for OP6 and OP7 soot, which consist of both less mature soot and a higher fraction of organic carbon, which may partly evaporate at elevated temperatures (Török et al. 2018). For both OP6 and OP7 soot, the shift of the fluence curve becomes even more evident, as shown in Figures 3b and c. Contrary to the OP1 case, the OP6 and OP7 soot show a strong peak-signal enhancement for increasing pre-heat laser fluences, which indicates that the enhancement of the LII signal due to annealing is the dominant process in comparison with the influence of sublimation. These observations will be further discussed in relation to Figure 4, where extended analysis of the fluence curves in Figure 3 is presented. When comparing all of the fluence curves of OP1 soot, in Figures 3a, d, and g, where the heating and LII wavelengths are different combinations of 532 and 1064 nm, it can be seen that the changes induced by the pre-heating give roughly the same resulting fluence-curve trends. This wavelength-independent behavior can be related to its mature character, exposing no significant laser-induced fluorescence (LIF) contribution when performing LII with λ_LII = 532 nm (Figures 3d and g). In contrast to the mature OP1 type of soot, the OP6 and OP7 soot are, as discussed previously, less mature and consist of a relatively larger mass fraction of organic carbon, which is partly refractory (Török et al. 2018). As can be observed in Figures 3e and f, where the prompt LII fluence curves are shown for OP6 and OP7 soot, respectively, when performing LII with λ_LII = 532 nm, some differences can be observed in relation to Figures 3b and c. The appearance of the curves looks similar, but it can be observed that for OP6 soot, a distinct increase in LII signal can be seen at low fluences (as pointed out by the red arrow in Figure 3e). Fluorescence is most likely a substantial part of the detected signal, and a deeper analysis of this LIF contribution will be performed in Section 4.4. In Figures 4a and b, the change in relative peak intensity and the change in absorption efficiency are shown for various pre-heating energies using λ_LII = 1064 nm, based on the data presented in Figures 3a-c.
The change in relative peak intensity is the ratio between the LII peak intensities, and the change in absorption efficiency is given by the relative shift of the fluence curve, specifically 1/shift, as a shift to lower fluences can be converted into a higher absorption efficiency. The black lines represent prompt LII detection, while gray lines display the delayed LII detection at 50 ns after peak LII to assure detection of pure LII and avoid potential fluorescence interference. In Figure 4a, the change in peak LII signal intensity is shown. For OP1 soot, the peak LII signal is rather constant until a pre-heat fluence of ~0.07 J/cm². For increasing fluences above 0.07 J/cm², the peak LII signal decreases to levels below the original level of the unheated soot, suggesting extensive mass loss by sublimation, as discussed in relation to Figure 3a. For the OP6 soot, the peak LII intensity increases extensively, by up to almost 40% above the unheated signal at 0.12 J/cm² pre-heating, suggesting that annealing and potentially a higher peak LII temperature are the dominant mechanisms. The same applies for OP7 soot, although to a much larger extent, as the peak LII is 80% higher than for the non-pre-heated soot at 0.12 J/cm² pre-heating. Hence, it appears that the thermal annealing process induces larger changes in the peak LII signal for the younger soot, prior to sublimation. In Figure 4b, the influence of pre-heating on the fluence curve position as a result of changed absorption properties is shown, observed as the shift to lower fluences (as indicated by the horizontal arrow in Figure 3b). The lower energy needed to reach a certain LII signal when the soot is pre-heated is related to the enhanced absorption efficiency, as discussed earlier in relation to Figure 2b; hence Figure 4b shows the change in absorption efficiency. It can be seen that for the mature OP1 soot, the absorption efficiency increases for fluences higher than ~0.05 J/cm². For OP6 soot the fluence curve is shifted at lower fluences, while OP7 soot needs approximately the same amount of energy to induce any fluence curve shift as OP1 soot. The reason for the non-consistent trend between the energy needed to induce a fluence curve shift and the soot maturity may be a combination of 1) the absorption efficiency related to the soot maturity, 2) the extent of thermal annealing and its influence on the absorption efficiency, and 3) the potential influence of evaporated organic carbon and other volatile species, of which an extensive amount evaporates at relatively low temperatures (800 K; Török et al. 2018). The mature and the young soot show quite different behaviors in the results in Figures 4a and b. The absorption properties of OP1 soot start to change at ~0.05 J/cm² (at 532 nm), while sublimation becomes significant at about 0.07 J/cm². For OP6 soot, the absorption properties change already at 0.04 J/cm², and there are indications of sublimation at around 0.1 J/cm². For OP7 soot, the absorption properties change at ~0.06 J/cm², and for this case potential sublimation is masked by the increased annealing.
Influence on soot maturity
The dispersion coefficient ξ describes the wavelength dependence of the absorption cross section (and emissivity) of soot, see Equation (2), where E(m,λ) is the absorption function, which describes how efficiently soot absorbs electromagnetic radiation of a certain wavelength λ.
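A commonly used form of this relation, expressing the wavelength dependence of the absorption cross section C_abs through the dispersion coefficient ξ (the exact formulation of Equation (2) may differ from the one given here), is

C_{\mathrm{abs}}(\lambda) \propto \frac{E(m,\lambda)}{\lambda} \propto \lambda^{-\xi}, \qquad \frac{E(m,\lambda_1)}{E(m,\lambda_2)} = \left(\frac{\lambda_2}{\lambda_1}\right)^{\xi-1} \qquad (2)

With this convention, ξ = 1 corresponds to a wavelength-independent E(m), and the factor (1/2)^ξ used in the caption of Figure 4 to place 1064 nm pre-heating energies on the 532 nm fluence axis follows directly from C_abs(532 nm)/C_abs(1064 nm) = 2^ξ.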
Often, the dispersion coefficient is determined in the visible and near-infrared region using two or multiple wavelengths. In Török, Mannazhi, and Bengtsson (2021), two-wavelength LII was used to measure ξ, which was estimated to be 1.17, 1.7 and 2.3 for OP1, OP6, and OP7 soot, respectively, showing a strong relationship between dispersion coefficient and maturity. In Figure 4c, the change of the absorption efficiency for OP1 soot is shown as a function of pre-heating fluence for all experimental conditions, with focus on the cases λ_pre-heat–λ_LII = 532-532 and 532-1064 nm, where specifically the pre-heating conditions are the same. The interesting observation is that the slopes of the curves are different when comparing LII using 532 and 1064 nm to probe the pre-heated soot with varying fluence. By fitting a linear function to the experimental trends, shown as orange curves, the influence on the dispersion coefficient could be obtained from the ratio of the curves. The resulting trend for the dispersion coefficient ξ as a function of pre-heating is shown as the blue curve, following the right y-axis. The same procedure was done for OP6 soot, as can be seen in Figure 4d. According to the change of ξ with pre-heating, a decrease from 1.17 to just about 1 was observed for OP1 soot and a decrease from 1.66 to just about 1.4 was observed for OP6 soot, when pre-heating with 0.13 and 0.12 J/cm², respectively. This shows how soot heating using a short laser pulse can change the optical properties of soot irreversibly, which has also been observed in Cenker and Roberts (2017) and Vander Wal, Ticich, and Stephens (1998). The influence of thermal annealing by pre-heating the soot may also be quantified by observing how the fluence curve position changes with pre-heating fluence. Figure 6. The fluence curves of 532-532 and 532-1064, with (2P) and without (1P) pre-heating, are shown, where the fluence-curve axis for the 532 nm case has been normalized to the 1064 nm case using the estimated dispersion coefficient ξ so that both excitation curves overlap. Also, all curves are normalized to the time-resolved LII signal at 50 ns after peak LII (and re-scaled to 1 for peak LII at 1064 nm) in order to obtain the difference between the fluence curve using 1064 nm (LII only; black and gray curves) and the fluence curve using 532 nm (LII + LIF; red curves). As an LII signal is observed at successively lower fluences the higher the pre-heating laser fluence used, it implies an increase in the estimated E(m,λ). For OP1 and OP6 soot, the E(m,1064 nm) would change from 0.33 and 0.16 (Török, Mannazhi, and Bengtsson 2021) to approximately (not taking sublimation and the change of thermal properties into account) 0.46 and 0.23, respectively, when pre-heated using 532 nm radiation of 0.13 and 0.12 J/cm².
Laser elastic scattering for mass loss estimation
The elastic light scattering (ELS) during the laser-soot interaction was recorded for all LII measurements performed using 532 nm, as done in Witze et al. (2001). As soot is heated to high temperatures, sublimation will occur and mass loss will influence the appearance of the scattering signal. At higher fluences, soot may reach sublimation temperatures before peak laser intensity, and hence both peak LII and peak ELS will appear earlier in time. Hence, the peak ELS intensity will then depend on both the sublimation process and the extent of mass loss, and cannot be directly used for estimation of the mass loss due to the pre-heating laser pulse.
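As a minimal numerical sketch of the mass-loss estimate developed in the following paragraph (ELS scales as d_pp⁶ for Rayleigh-regime scatterers while particle mass scales as d_pp³, so the remaining mass fraction is the square root of the ELS ratio), the Python snippet below uses hypothetical placeholder values chosen so that the output matches the roughly 80% remaining mass reported for strongly pre-heated OP1 soot; the numbers are illustrative, not measured data.

import numpy as np

# Hypothetical fitted peak ELS values (arbitrary units), as predicted by the
# leading-edge fit, i.e. as if no sublimation occurred during the probe pulse.
els_1p = 1.00   # without pre-heating (1P)
els_2p = 0.64   # with strong pre-heating (2P); placeholder value

# Rayleigh regime: ELS ~ dpp^6 and particle mass ~ dpp^3, hence
# m/m0 = (dpp_2P / dpp_1P)^3 = sqrt(ELS_2P / ELS_1P)
mass_fraction = np.sqrt(els_2p / els_1p)
print(f"Remaining mass after pre-heating: {mass_fraction:.0%} of the original")  # -> 80%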
As ELS is obtained from soot without pre-heating (1P) as well as with pre-heating (2P), the difference in signal may be utilized to obtain information about the influence of sublimation from the pre-heating pulse. By fitting the scattering signal obtained at a low fluence, where sublimation is negligible, to the leading edge of all ELS signals (20%-50% of the maximum ELS signal), which is a common method for estimating coating mass using continuous-wave LII (CW-LII) (Gao et al. 2007), the ELS signal as a function of laser fluence can be predicted as if no sublimation had occurred. The difference compared to the measurements in Gao et al. (2007) is that the soot is exposed to a Gaussian beam of nanosecond length, not of microseconds. In Figure 5a, a sequence of time-resolved scattering signals is shown (in orange) together with a fitted signal (in blue), which should represent the scattering signal independent of sublimation. In Figure 5b, the fitted peak ELS signal is shown as a function of fluence for an OP6 case as black markers (1P triangles and 2P circles). The signal trends hence show the peak ELS signal as it would appear if no sublimation occurred. As the scattering signal scales according to ELS ∝ d_pp⁶, the approximate relative mass m after sublimation, as a percentage of the original mass, can be estimated as the square root of the ELS ratio. In Figure 5c, the ELS trends for OP1 and OP6 soot are shown as a function of the pre-heating fluence. For OP1 soot, mass loss becomes significant at ~0.18 J/cm² (which corresponds to 0.08 J/cm² at 532 nm) and is in relatively good agreement with the observed trends in the fluence curve analysis in Section 4.1. It can be seen that when pre-heated with 0.35 J/cm² at 1064 nm (~0.16 J/cm² at 532 nm, assuming ξ = 1.17), the mass after extensive pre-heating reached approximately 80% of the original mass. For OP6 soot, the scattering signal appears to decrease at even lower fluences than for OP1, leading to more efficient mass loss. In Cenker and Roberts (2017), a loss down to 45% of the original mass was observed for less mature soot in a Santoro burner flame when pre-heated with 0.33 J/cm², and these data have also been included in Figure 5c. It is plausible that part of this signal loss is related to the evaporation of volatile hydrocarbons at temperatures below 1000 K. In addition to uncertainties in thermal annealing and optical properties, it should further be noted that elastic light scattering is a complex process for fractal aggregates like soot and that the present analysis is based on a simple mass-loss model. An extensive investigation of the scattering properties of OP1, OP6, and OP7 soot has been presented in Karlsson et al. (2022). Finally, by utilizing the LII model and by assuming that E(m,1064 nm) changes from 0.33 to 0.46 due to pre-heating, we may estimate the possible mass loss, by assuming that the evaporated species do not contribute to the LII signal. As the pre-heating of 0.13 J/cm² at 532 nm would correspond to 0.3 J/cm² at 1064 nm (assuming ξ = 1.17), LII modeling (shown in Figure 2b) shows that the mass would be approximately 85% of the original mass. The result is included in Figure 6 (pink circle), and as can be seen, it agrees well with the results based on the ELS measurements of OP1 soot.
LIF interference
The LIF contribution to the LII signal can be overcome by considering the time-resolved LII signal at a time when the LIF contribution is considered negligible (as done in the ξ analysis) (Therssen et al.
2007; Cléon et al. 2011; Musikhin et al. 2019). Here, however, the contribution of LIF will be estimated with and without pre-heating prior to the LII measurement. The contribution of LIF at the time of peak LII is estimated by overlapping the time-resolved LII signal at 50 ns delay after peak LII, as done in Török, Mannazhi, and Bengtsson (2021). In Figure 6, these curves are shown, with (2P) and without (1P) pre-heating, using the laser wavelength combinations 532-532 and 532-1064 with low and high pre-heating fluences for OP1 and OP6 soot. As the 532-1064 combination shows the LII signal alone, the curves (in black and gray) are normalized. The 532-532 combination shows the LII + LIF signal (in red), and hence the difference between the curves (black/gray and red) can be considered the LIF contribution. For OP1, it can be seen that regardless of the amount of pre-heating, the fluence curves obtained with 532 and 1064 nm overlap well for both the 2P and 1P cases, indicating that there is no significant LIF contribution, either with or without pre-heating. For OP6 soot, it can be seen that there is a considerable contribution from LIF, but the contribution does not change when pre-heating with 0.03 J/cm². Using 0.12 J/cm² pre-heating, however, a change is induced, showing an enhanced LIF contribution to the signal. The origin of this fluorescence signal is uncertain but may be related to volatiles released by the pre-heating pulse, which are then probed by the LII laser pulse. Another interesting observation is that the fluorescence is much stronger at 575 nm than at 684 nm. This could indicate that the LIF signal originates from polycyclic aromatic hydrocarbons, which in sooting flames have a spectrum in the visible region after excitation at 532 nm. Also, this type of species has been detected in OP6 and OP7 soot after laser ablation in an SP-AMS (Malmborg et al. 2019). The observations here can be discussed in relation to the changes in the fluence curves in Figures 3e and f, where an enhanced signal contribution is observed at low fluences due to an enhanced LIF signal contribution as a result of substantial pre-heating of the young soot. It should, however, be noted that the difference between the curves does not represent the maximum LIF signal contribution in general, as the peaks of the LII and LIF signals do not necessarily overlap for all fluences. The enhanced LIF signal contribution may appear unexpected, as thermal annealing will induce structural changes of the heated soot, which thereby obtains properties of more mature soot. But, as observed by Michelsen et al. (2007) and Migliorini et al. (2020), evaporated species and fragments from laser-heated soot particles may nucleate and form new particles in the size range of ~10 nm. In Michelsen et al. (2007), a new mode of particles was formed when soot was laser-heated at fluences higher than 0.12 J/cm² at 532 nm. From transmission electron micrographs, it was observed that the small particles were at least partly of very low structural order. Also, in Migliorini et al. (2020), laser-heated soot from an ethylene and a methane flame was investigated, showing characteristics of less mature soot with increasing laser heating. As the estimation of ξ was performed from extinction measurements, small particles and fluorescing species may have a non-negligible influence on ξ.
Hence, we suggest that one possible explanation of the enhanced LIF contribution in our results may be related to the evaporation of carbon fragments, which may fluoresce when exposed to laser radiation at 532 nm. In this work, it is, however, not investigated whether these species nucleate into small particles of low structural order or remain in the gas phase. For OP1 soot, there was no indication of any significant LIF signal, which is not surprising as the organic fraction of this soot is very low (Török et al. 2018). Additionally, it was shown in Malmborg et al. (2019) that laser heating of OP1 soot preferentially led to small carbon fragments such as C, C2 and C3, which have no absorption and fluorescence characteristics in the visible spectral region.
Applicability
The present work can give guidance in the choice of an optimal excitation wavelength for single-pulse LII in a sooting environment with large variation in soot maturity. As young soot has a strong wavelength dependence and absorbs more efficiently at shorter wavelengths, it may be preferable to choose a wavelength as short as possible, but without inducing any fluorescence. It is true, though, that the influence of mature soot on the integrated LII signal in a sooting probe volume will be dominating. This is the case both because its efficient absorption properties and its emissivity are directly proportional to E(m,λ), and because mature soot mass is often dominant in many combustion processes. Nevertheless, it is important to consider the influence of the choice of LII laser wavelength for efficient detection. Hence, a laser wavelength around 700 nm for LII measurements would avoid creating significant fluorescence and simultaneously minimize the difference in absorption efficiency between different types of soot. Double-pulse LII measurements have in the present investigation been used for a fundamental investigation of soot characteristics. However, based on the present results, the approach could potentially be used as a soot diagnostic. As such, the double-pulse method could be used to enhance the detectability of young, poorly absorbing soot in sooty environments where not much mature soot is present, as the first pulse anneals the soot and increases its absorption efficiency, thereby increasing the sensitivity of the detection using the second pulse. Depending on the type of LII study, it may, however, be important to consider the influence of pre-heating not just on the enhanced absorption, but also on the induced changes to the LIF signal and the thermal properties of the soot. We further speculate on the potential of detecting very young soot by double-pulse LII, in order to get closer to the point where soot is formed and to differentiate between refractory soot and precursors such as PAHs.
Summary and conclusions
The response of mature and young soot to rapid (nanosecond) laser heating was investigated using a double-pulse LII setup. By pre-heating the different types of soot using the first laser pulse, LII measurements were done using either 532 or 1064 nm radiation. The use of two different wavelengths allowed for the estimation of the wavelength dependence of the processes occurring due to pre-heating. The novel findings presented in this work are listed below.
1. Pre-heating the soot altered the mature and young soot in different ways. While the mature soot exhibited an absorption enhancement, suggesting thermal annealing, mass loss was also prominent, influencing the peak LII intensity.
For the young soot, modest pre-heating resulted in extensive enhancement of both the absorption efficiency of the soot and the peak LII intensity.
2. For OP1 and OP6 soot, the influence of pre-heating on the dispersion coefficient was estimated. A decrease was observed for both OP1 and OP6, as their values decreased from 1.17 to 1.03 and from 1.67 to 1.42, respectively, when heated with 0.13 and 0.12 J/cm² at 532 nm. This change in absorption wavelength dependence is related to the more mature character of the soot after annealing.
3. From elastic laser scattering measurements, leading-edge fitting was performed to estimate the mass loss due to pre-heating. The method agreed well with mass-loss estimations from the LII model.
4. From analysis of the fluence curves based on the time-resolved signal at 50 ns after peak LII, the contribution of LIF at 532 nm could be estimated for OP1 and OP6 soot. Pre-heating the soot induced an enhancement of the LIF contribution for the OP6 soot. We suggest, in alignment with other works, that vaporized carbon species from the soot particles fluoresce when exposed to laser radiation at 532 nm. It was, however, not investigated whether this signal originates from gaseous species or newly nucleated particles.
Further, we suggest the use of the double-pulse technique as a method for increasing the detectability of young soot, which absorbs poorly at the most common LII wavelength of 1064 nm. Also, the present study can give guidance for choosing the optimal LII laser wavelength for efficient detection of soot in relation to its maturity.
10,056.2
2022-04-05T00:00:00.000
[ "Materials Science", "Physics" ]
“A process of controlled serendipity”: An exploratory study of historians' and digital historians' experiences of serendipity in digital environments
We investigate historians' experiences with serendipity in both physical and digital environments through an online survey. Through a combination of qualitative and quantitative data analyses, our preliminary findings show that many digital historians select a specific digital environment because of the expectation that it may elicit a serendipitous experience. Historians also create heuristic methods of using digital tools to integrate elements of serendipity into their research practice. Four features of digital environments were identified by participants as supporting serendipity: they enabled exploration, highlighted triggers, allowed for keyword searching and connected them to other people.
INTRODUCTION
A digital environment is a platform or tool used to access and manipulate information, such as digital libraries, databases, social media and journals. However, not all disciplines have embraced these digital environments to the same extent and, even within a single discipline, scholars have made use of digital tools to different degrees. This paper takes historians as its focus, including the subsection of historians who self-identify as digital historians. Historians have become increasingly digital over the past decade, using and designing different tools to aid their own research (Fyfe, 2015; Leary, 2015). Often the designation digital historian is used to describe those history scholars who integrate various digital sources and tools into their work practice. While distinctions between historians and digital historians have been questioned, the label of digital historian is used in this paper to describe those historians who self-defined as digital in the context of our survey. To date, information scholars have tended to focus on humanities scholars as a group without paying much attention to the unique information needs and scholarly practices of historians (some exceptions include I. Anderson, 2010; W. M. Duff & Johnson, 2002; Tibbo, 2003). Historians, however, have attributes that stand out from other humanities scholars, including extensive use of the library and archives (Case, 1991; Delgadillo & Lynch, 1999), the importance of primary sources to their research (Rutner & Schonfeld, 2012) and the common experience of serendipity while researching (Anderson, 2010; Duff & Johnson, 2002; Martin & Quan-Haase, 2013). It is important to study digital historians to understand how the use of digital sources and tools is influencing these unique attributes of historical research. The present paper examines historians' perceptions of how digital environments have affected their experiences of serendipity. Much research has looked at the role of serendipity in historical scholarship. Anderson (2010) lists serendipity as an information-seeking method used by historians in his examination of their work with primary resources. Kirsch and Rohan (2008), in the introduction to their collection Beyond the Archives, argue that their work teaches historians to attend to the facets of their research that "seem merely intuitive, coincidental, or serendipitous" (p. 4) in order to identify areas of scholarly research. Fyfe (2015) sees the recognition of a serendipitous connection as a skill in which historians can be, and should be, trained.
Despite the attention that serendipity has received in the literature on historians' scholarly practices, little is known about which specific environments are perceived as most conducive to serendipity, and few attempts have been made to isolate the effect of specific features on serendipitous experiences. The present paper investigates the following two research questions:
• What digital environments are historians using to encourage serendipity in their research?
• Which features of digital environments do historians see as supporting serendipity?
LITERATURE REVIEW: SERENDIPITY IN THE DIGITAL ENVIRONMENT
Several recent studies investigate the role of serendipity in the digital environment and lay the groundwork for our own examination of this experience by historians. In an attempt to trigger a serendipitous encounter in a digital environment, Toms and McCay-Peet (2009) set up an observational laboratory study that saw 96 participants complete three tasks using a Wikipedia-based tool developed for the study, called "Suggested Pages". Forty percent of their participants used the tool, reporting that the links they found through "Suggested Pages" were relevant to their assigned tasks and were surprising, but some also deemed them a distraction from the task at hand. The authors noted, however, that the laboratory study did not replicate typical behaviour, and that there was much left to understand about how to trigger a serendipitous encounter with information in a digital environment. Race (2012) examined the serendipitous features associated with web-scale, user-friendly discovery tools such as WorldCat and EBSCO. She noted the importance of personalizing the search process and demonstrated that interactivity between the user and the computer system could help users better realize interconnections. The main strength of Race's article lies in her summary of web-scale discovery tools that support serendipity. Here Race managed to break down the various tenets of serendipity (browsability, hypertext links, visualization of results, etc.) and determine whether each of the aforementioned tools supports these features or not. McCay-Peet, Toms, and Kelloway (2014) conducted a series of studies with the aim of developing robust measures of serendipity that were specifically geared to the unique context of digital environments. They identified five features of a serendipitous digital environment or SDE:
• Trigger-rich: The digital environment is filled with a variety of information, ideas or resources interesting and useful to the user.
• Enables connections: The digital environment exposes users to combinations of information, ideas or resources that make relationships between topics apparent.
• Highlights triggers: The digital environment actively points to or alerts users to interesting and useful information, ideas or resources using visual, auditory or tactile cues.
• Enables exploration: The digital environment supports the unimpeded examination of its information, ideas or resources.
• Leads to the unexpected: The digital environment provides fertile ground for unanticipated or surprising interactions with information, ideas or resources.
Other studies of serendipity in digital environments focus on how best to capture these experiences, which are most often collected in the form of self-reports (Makri et al., 2015). Makri et al. (2014) interviewed 14 creative professionals about their personal strategies for influencing serendipity, and then discussed the various ways in which digital environments support these personal strategies.
For example, a creative professional mentioned "varying their routines" as a personal strategy. Makri et al. (2014) suggested that designers of digital environments could support serendipity by recommending material tangentially related to the users' work, or by encouraging users who have similar interests to share links to web sites. For the authors, digital environments that support these personal serendipity strategies would be more beneficial to both creative professionals and general users because they support elements of serendipity rather than attempting to offer "serendipity on a plate" (Makri et al., 2014, p. 2181). The literature review shows various approaches by which digital environments can be designed to promote serendipity. The literature so far has not focused on historians and how digital environments may be designed to aid their scholarly work. As serendipity is central to their practice, designing digital environments with their information needs in mind could help support their work.
METHODS
The survey was developed by building on previous findings based on interviews with historians about their scholarly practice (Martin, 2016; Martin & Quan-Haase, 2013, 2016). The online survey was chosen as a method to reach a diverse set of historians, after attempts to recruit members of this population for interviews proved challenging.
Sample
A total of 142 participants started the survey, of which 90 provided answers to all questions (N=90). We did not require that participants answer all questions, as only those who could recall a specific serendipitous experience were able to answer the survey in full. Also, several of our questions were open-ended and required more time and effort than simply clicking a button, which may have influenced question non-response (Reja, Manfreda, Hlebec, & Vehovar, 2003). As the number of respondents to each question differed due to how the survey was set up in Qualtrics, we report the number of participants, n, who provided responses to each question.
Online Survey
Data were collected via an online survey that took about 15 minutes to complete (Martin, 2016). There were four sections to the survey: Section A: background on participants' historical research, Section B: serendipitous experiences while conducting research, Section C: serendipitous experiences while in physical and digital environments, and Section D: demographic information. Where available, we relied on previously validated measures. McCay-Peet's (2013) scales provide a "direct measure of serendipity" in digital environments and in life in general (Q19, Q21, and Q23). These helped to establish the basis for historians' experiences with serendipity and to test to what extent the digital environments they used in their research encouraged serendipity. Open-ended questions were included to allow participants to expand on their experience. These open-ended questions help triangulate findings from the questionnaires and also expand on the numeric values by adding rich data about the experiences of scholars. To understand what role digital tools played in participants' research, the following question was included: Would you describe yourself as a digital historian? (Q17), to which 48% of the participants answered "Yes" (n=87). Q19 asked respondents to list three types of digital environments in which they had experienced serendipity: "Please list up to 3 digital environments where you have experienced serendipity.
Please be specific, for example, if this occurs on social media, please indicate the platform (e.g., Twitter)." As a follow-up, respondents were also asked to describe which features of each of the three digital environments listed in Q19 they thought were most conducive to serendipity. Specifically, Q21 stated, "Please describe the features (e.g., keyword searches, browsing options, interaction with others) of this specific digital environment that you find to be most conducive to the serendipitous encounter." We were also interested in the features they thought promoted serendipity across all digital environments. For this purpose, Q23 asked, "Please describe the features of a digital environment that you find to be most conducive to the serendipitous encounter." Online surveys have the benefits of being convenient to the participant and time-saving for the researcher (Evans & Mathur, 2005; Sax, Gilmartin, & Bryant, 2003). However, there are also downsides to online surveys, such as a lack of response from non-internet users, and privacy and security issues (Evans & Mathur, 2005). As we were particularly interested in the research habits of digital historians, the use of an online survey was justified. The survey access link was distributed via social media, listservs and emails to history departments across Canada to reach a wide and diverse audience. As Twitter was popular among many historians, we also disseminated the link to the online survey using the hashtag #twitterstorians, which is followed by historians. To reduce concerns over privacy and security, Qualtrics was employed for the collection of data. Qualtrics does not rely on cloud-based data storage, as data are stored locally on a secure university server. We collected demographic information from our participants, such as age, gender and academic background, and no identifying information was collected, to guarantee the anonymity of respondents. We obtained ethics approval, and the survey was live from February through April 2015, during which time the primary researcher did weekly checks to ensure there were no cases of intentional misuse.
Data Analysis
As this paper reports on a preliminary analysis, questionnaire responses were analyzed using descriptive statistics in R. For Q19 (see wording above), participants could list up to three digital environments where they had experienced serendipity. Seventy-nine participants listed a total of 194 digital environments, and these were then separated into the types of digital environment that historians had previously been asked to report their comfort with in Q18. As the participants were not asked to rate these environments, they were coded according to the same 10 digital environments as Q18, with the addition of three categories ("Databases," "Archives" and "Ancestry websites") to account for the digital environments mentioned by participants that fell outside of the original ten. Because of the complexity of the answers to Q21 and Q23, a deductive content analysis approach was utilized. Usually this approach is recommended when "the structure of analysis is operationalized on the basis of previous knowledge and the purpose of the study is theory testing" (Elo & Kyngäs, 2007). We used the previously established categories of serendipity by McCay-Peet et al. (2014). Their five facets of an SDE, identified in the literature review above, provided a starting point for the content analysis.
To ensure that as many of the historians' responses as possible were included in the analysis, it was important to remain open to other categories being created if the five facets of SDEs previously identified by McCay-Peet et al. (2014) did not account for most of their responses. In the first phase, themes or phrases were used as the unit of analysis (Berg, 2005), and each of the historians' responses to Q21 was categorized into the five facets, with many answers being divided into multiple phrases and some phrases fitting into multiple categories. Three additional themes emerged as prominent in the responses to Q21: "People," "Heuristic Search," and "Keyword Search." "People" and "Heuristic Search" were created as sub-categories of "Enables Connections" and "Highlights Triggers," respectively. The final coding scheme used for the analysis is shown in Table 1.
Table 1. Codes and descriptions used in the content analysis.
• Trigger Rich: The digital environment is filled with a variety of information, ideas or resources interesting and useful to the user.
• Enables Connections: The digital environment exposes users to combinations of information, ideas or resources that make relationships between topics apparent.
• Sub-code EC - People: Where the connection is made as above, but involves people as either the providers of information or the link to information.
• Highlights Triggers: The digital environment actively points to or alerts users to interesting and useful information, ideas, or resources using visual, auditory, or tactile cues.
• Sub-code HT - Heuristic Search: Same as above, but search is involved, showing agency on behalf of the historian.
• Enables Exploration: The digital environment supports the unimpeded examination of its information, ideas or resources.
• Leads to the Unexpected: The digital environment provides fertile ground for unanticipated or surprising interactions with information, ideas or resources.
• Keyword Search: Any time the respondents include keyword search, often with no, or very little, description.
After the codes were refined and finalized, Q21 and Q23 were recoded according to the same set of categories. One additional reliability coder went through about half of the data to assess reliability. The intercoder reliability for Q21 was Cohen's Kappa = .62. According to Landis and Koch (1977), this score is at the lower end of "substantial" agreement strength. The intercoder reliability for Q23 was higher, at Kappa = .72, at the higher end of "substantial" agreement strength. This indicates that there is room for clarification of the coding scheme we employed, to avoid confusion between codes in future studies.
Digital historians, digital environments
Respondents reported where they experienced serendipity. Figure 1 shows that serendipity was experienced more frequently in a physical library or archive than in digital library interfaces or while researching on the web. We compared responses from those who had identified as digital historians with those from respondents who did not identify as digital historians. We found that those who identified as digital historians experienced serendipity more frequently in digital environments than non-digital historians. Serendipity was experienced more frequently on the web than in a library interface, but this may also be due to the frequent use of web-based search engines (Kemman, Kleppe, & Scagliola, 2013).
We then listed ten different digital environments and asked the respondents to rate their comfort level with these environments on a five-point Likert-type scale ranging from "very uncomfortable" to "very comfortable" (Q18). Figure 2 shows that respondents were comfortable with digital environments that they would come across as part of their working day, such as search engines, word processing tools, email and library interfaces. As the survey was conducted online and recruitment was partially done via Twitter, it is not surprising that the participants were also comfortable with social media. Finally, the two digital environments where the participants indicated to be the least comfortable with were "Writing Code" and "Software Development Tools," where only 16% and 8% indicated to be "somewhat comfortable" or "very comfortable." Figure 2. Respondents' comfort with digital environments The answers to the question "Please list up to three digital environments where you have experienced serendipity" (Q19) resulted in a list of 194 digital environments. The answers to Q19 can be seen in Figure 3. Social media is the digital environment most commonly named by historians as a place where they experience serendipity. While the answers to the questions regarding features of digital environments (see below) support this finding, it should be noted that we used Twitter as one method of recruitment for this study, thus many of our participants are likely to feel comfortable using social media, and to use it frequently, possibly increasing their experiences of serendipity in this digital environment. "Library Interfaces," "Databases" and "Archives" are digital environments in which the historians also reported experiencing serendipity. Figure 3. Digital environments where historians experience serendipity As we originally only included "Library Interfaces" in our list of digital environments, and later added "Databases," "Archives" and "Ancestry websites" to account for the historians' own answers about where they experience serendipity, more work is needed to explore this breakdown of digital environments and the experiences of serendipity in the digital and physical versions of each. Though the participants were largely comfortable using a variety of digital environments, including email, social media and search engines, there are some digital environments, like software tools and writing code that have not yet been integrated into the digital tools of most of these historians. The Frequency of Serendipitous Experiences Encountering useful information while using digital environments was the most frequent response amongst our participants, who also tended to experience work-related serendipity slightly more often than serendipity that impacts their everyday life (see Figure 4). Figure 4. Experiences of serendipity in digital environments (n=80) A large percentage of historians selected "sometimes" as their response to these questions. It was evident from Figure 5 that digital historians experienced serendipity more frequently in digital environments than other respondents. Again, digital historians were more likely to experience work-related serendipity when using a digital environment, than they were to experience serendipity that impacts their everyday life. To further understand our population's experiences with serendipity, we then asked them to think about their life experiences in general (Q23), not just in digital environments. 
As Figure 6 demonstrates, these responses were similar to the responses regarding the participants' experiences using digital environments. Figure 6. Experiences of serendipity in general However, when we broke these responses down into the "Yes" or "No" answers to Q17 (Would you describe yourself as a digital historian?) (Figure 7), the result was that both groups reported experiencing serendipity to a similar extent across the four questions. In fact, very few historians reported to "Never" experience serendipity, except for a small percentage that reported that this phenomenon had never impacted their everyday lives. Overall then, despite our population reporting similar experiences with serendipity in their lives in general (online and offline), when it came to using digital environments, those who identified as digital historians were more likely to experience serendipity when working in a digital environment. Features That Support Serendipity To begin answering RQ2, we coded the number of times each category was mentioned ( Table 2). Each of the features was mentioned in the historians' responses to both Q21 and Q23, to varying extents. "Highlights Triggers," "Enables Exploration," "People" and "Keyword Search" were all prominent categories, though all eight categories were represented by the participants' responses, showing that serendipity was an experience that could occur in many different contexts, and that digital environments require multiple features to support serendipitous information behavior. The features are discussed individually below in detail, from the most commonly identified feature ("Enables Exploration") to the least commonly identified feature ("Trigger Rich"). Figure 7. Experiences of serendipity in general for digital/non-digital historians Features of a Digital Environment that Support Serendipity Enables Exploration Of the features that supported serendipity, there were three types that historians used to explore information. First, there were those related to browsing material on the web, either using links available on blogs, websites or in citations. Google was mentioned several times, with participants indicating they use the search results to explore and browse comparable to how they would in a physical environment, as Participant 22 pointed out: "I use Google and Google books like a library interface." (P22) Second, historians also spoke about the relevance of linked open data and the semantic web to their research. Finally, historians indicated that exploring a full text primary source, particularly one that was previously unavailable to them, often resulted in finding new and relevant information. Keyword Search As outlined in the methods section, the high number of historians who mentioned keyword search in their answers to Q21 and Q23 might have been due to our decision to mention this as an option in the wording for Q21. However, many historians expanded upon the reasons they found keyword search to lead toward serendipitous results. For example, Participant 52 reported: "Keyword searches often bring up serendipitous results because they do are not confined to the usual 'silos' of archival references. They search across fonds and can bring up results from the entire archive, provided that enough is made searchable." (P52) Thus, it is not so much the keyword search feature that results in serendipity, but the ability of the algorithm to gather material from different places and to cast a wider net than historians might be able to on their own. 
People Social media was reported by the historians to be the digital environment where they most commonly experienced serendipity. For these scholars, comments on blog posts, Facebook conversations, and connections to their Twitter community often led to new insights. The historians largely recognized that they self-selected this community, curating their connections, and that they had interests in common with those they followed, particularly on Twitter. For Participant 16, this was one way in which she could exert agency over her serendipitous experiences: "It's a process of controlled serendipity: I follow people I'm interested in, for example, or start on a webpage that is key to my work. From there, I go on structured explorations." (P16) We placed "People" as a sub-code under the heading of "Enables Connections" because historians spoke of people sharing information they could relate to, or having conversations with those in their field that inspired new ideas. Some of these phrases were also coded as "Highlights Triggers," but we felt it necessary to categorize the times that people were mentioned to demonstrate the prominence of social media in the historians' responses. Highlights Triggers For our participants, the most common way that triggers, or alerts to interesting or useful information, were presented in digital environments was as hashtags on Twitter. Typing words this way turns them into links that allow users to click on them and see a list of current posts that include the same hashtag. Our participants noted how useful it was to be able to follow relevant hashtags, particularly around a conference they were interested in ("following conference hashtags is helpful" P25) or debates by colleagues ("hashtags that help follow debates" P36). Other ways that digital environments highlighted triggers were recommendations presented with search results and links shared by others on social media. Enables Connections Digital environments that enable connections often presented our historians with new ways of looking at material. Word clouds and other types of visualizations enabled new associations between materials, as Participant 57 pointed out: "Interfaces that allow to see connections I wouldn't have thought of, like tag clusters. This seems to somehow recreate the effect of browsing the shelves or folders in a physical archive/library." (P57) Another feature of digital environments that historians indicated led them to serendipitous finds was the keyword-search algorithms in tools such as Evernote or DEVONthink, which show material related to the term searched for, instead of just that specific term. Because these tools allow a user to collect information from the Web and collate it in one location, when historians search, they know the information is relevant to their work. The feature they found most useful was the way the algorithm found and presented material, which, according to Participant 54, "Shows you what's CLOSE to what you were looking for." (P54) The participants reported that this allowed them to make connections from there.
Heuristic Search Although participants reported relying on the algorithms to present information in meaningful ways, they also take it upon themselves to understand the tools they use in digital environments and learn to use them to their advantage, as Participant 64 indicated: "I think that test digital tools once and once again and by different ways, you can know the tools, find how use it and, if it is possible, adapt it to your needs." (P64) Search tools were one method of information seeking in the digital environment that many of our participants were used to manipulating. Some mentioned constantly changing their search terms, or purposefully misspelling names and places they searched for to get a wider variety of results, and therefore having a greater chance of experiencing serendipity. Participant 13 demonstrated this: "Key word searches are good, but you must be flexible with them and change the words until you get a strike. This is something like fly fishing." (P13) As historians do in physical libraries and archives, our participants used the digital tools available to them in ways that supported serendipity in their research. Leads to the Unexpected The unexpected was a very common term in these historians' definitions and stories of serendipity (Martin, 2016). However, it did not feature prominently amongst the features of a digital environment that the historians felt supported serendipity. Although there were a few historians who mentioned having "illuminating, and occasionally serendipitous conversations" on Twitter that took them to unexpected places (P38), it was largely the results of a find or a conversation that lead them in a new direction, not a feature that could be relied upon. It may have been difficult for the historians to think in terms of features that "Lead to the Unexpected" as users might not recognize that the digital environment is "fertile ground for unanticipated or surprising interactions" until after they have made a serendipitous connection (McCay-Peet et al., 2014). Trigger Rich Finally, we only found six references to digital environments that were "Trigger Rich," which were usually in passing, in phrases such as "Mostly just following hyperlinks" (P17). This does not necessarily mean that environments that include a lot of links to other material were not found to be serendipitous, because it seemed to us that these historians simply took for granted the links available on the web, and only drew attention to them when they were in useful or unexpected places, such as links to citations in online Works Cited sections of journal articles. Twitter was another place that could have been classified as being "Trigger Rich," as the information on this site is constantly changing, and links are provided here to other sources of information. However, here the historians predominantly mentioned the people they connected with through Twitter and how they followed conversations that interested them, rather than the preponderance of links available. Overall, the five facets of serendipity in a digital environment (McCay-Peet et al., 2014) served well as a classification structure for the historians' responses to Q21 and Q23. While there was some difficulty with classifying features of digital environments under the facet "Trigger Rich," this largely stemmed from historians' immersion in the online world, and their taking pages with many links for granted. 
It must be noted that we used these categories as a coding scheme, which is different from how McCay-Peet et al. (2014) employed them in their studies. The authors discerned five facets of serendipity and showed their connection to serendipity in the digital environment via concentrated statistical analyses. We expand this work not by further validating the established measures, but rather by using them as a framework for guiding our understanding of serendipity in the digital environment, which also allowed us to remain open to the creation of sub-codes where necessary. DISCUSSION We presented the findings of a preliminary analysis of historians' experiences with serendipity in digital environments. Our investigation of their comfort in these environments demonstrated a large range: while many participants were comfortable with digital tools that they used in their everyday lives (email, word processing and social media), only a small percentage of the participants reported being comfortable writing code or using software development sites such as GitHub. Over half of the sample were comfortable using citation management tools such as Zotero or Endnote, as well as maintaining a blog. The variety of digital environments where historians worked was highlighted throughout our investigation of serendipity. Not only did participants describe selecting their digital environments based on whether they felt they supported serendipity, but they also found various ways to make the digital environments they chose to use more serendipitous for their research. For many this meant learning how to change their search terms to get fewer or more results, depending on their current need. In our previous paper, we used the term "heuristic" to describe the various methods that historians used to support elements of serendipity in digital environments (Martin & Quan-Haase, 2016, p. 1016). The descriptions of the features of serendipity in the present study provide further detail about the ways historians are working to support serendipity in their digital research environments. This led us to coin the term "Heuristic Serendipity", which we define here as: a process of information behavior in which historians use trial and error to create new, innovative methods of supporting serendipity throughout their research. For the participants of our current study, this type of heuristic serendipity usually took place on Google or on library interfaces, both digital environments in which participants indicated they were comfortable. Our participants often spoke of wanting search results that were "close to perfect," but not necessarily limited to a single, correct answer. To create results of this nature, historians have started to manipulate their search tools and the other digital environments they use for research. There are two main ways that our participants indicated doing this. First, they tried out a variety of digital tools until they found what worked for them. Which digital environment they use, and how advanced the features they employ within it are, will obviously be affected by their comfort and level of technological expertise. Some historians mentioned generating visualizations, which would "somehow recreate the effect of browsing the shelves or folders in a physical archive/library" (P57), while others spoke of finding a research tool with an interface they preferred, which allowed them to keep their own personal database of research material.
The second method of manipulating their search tools was to introduce flexibility into their searches, by including misspellings, wrong words and different combinations of terms. Several historians also mentioned that faceted or advanced search options allowed them to encounter things that they considered unlikely in other environments. Once they have obtained the results they were looking for, using either of the above methods of heuristic searching, the participants describe looking around this material in various ways. This form of information behavior was described much like other scholars have discussed browsing the stacks of a library (Björneborn, 2008;McKay, Smith, & Chang, 2014): searching around material, browsing through search results, etc. It is this information behavior that enables heuristic search to become heuristic serendipity. This is where historians' own ability to connect the dots between historical research materials comes into play, and their recognition of useful, enlightening or significant information can create a serendipitous experience. These skills are something that cannot be replaced by a single feature of a digital environment, which is one reason that historians are learning to control and manipulate these environments to suit their needs. Finally, we asked our participants about the various features of digital environments that they felt supported serendipity. We found that there was a wide variety of features that historians found to support serendipitous experiences; some of them were features of the environments themselves, while others were the results of historians' heuristic serendipity. Four features were prominent: those that enabled exploration (by supporting links to other material, or having full text access available), those that highlighted triggers (such as hashtags on social media, or highlighted materials as suggestions), those that allowed for keyword search (where historians could alter their search terms fluidly) and finally, those that connected them to other people. Dantonio et al. (2012) found that academics got the most out of Twitter when they were using it while taking a break from their research work, but our historians seemed to use the tool throughout their process, as a way of following along with conferences and engaging with other about their research. Participant 63 notes that it is the "constant flow of information" that helps support their serendipitous experiences. This use of Twitter aligns more closely with the serendipitous experiences that were reported in a study of Twitter use by digital humanities scholars (Quan-Haase, Martin, & McCay-Peet, 2015). These participants reported that the ubiquitous qualities of Twitter helped them to maintain awareness of new information in their research area. For our historians, it is not only the ubiquity of the Twitter interface, but also knowing that they exert control over its features and functions that helps to support serendipity in this particular digital environment. CONCLUSION Historians themselves are operationalizing serendipityremaining aware of the multiple ways to access information and then exerting control over their digital research environments to make serendipity possible. 
Just as historians of the past were trained to use libraries and archives to their fullest extent, digital historians must now be trained with the "critical awareness" that Solberg (2012) calls for; they must continue to recognize the strengths and weaknesses of the digital environment to continue to be agents in their own experiences with serendipity. FUTURE WORK Future work by the authors on this topic will include further integration of McCay-Peet's (2013) serendipity questionnaire, including a factor analysis to compare to her more recent findings (McCay-Peet et al., 2014). Now that we have made a significant step in understanding how serendipity plays a role in historians' research process, future work may include studies of other disciplines. Also, as this study benefitted from the knowledge of previous LIS studies on historians, using the results of the current study as a guide for future work on the use of technology by historians would help to show how historians' comfort level with technology, and uses of digital environments changes over time.
KLOE Results on Hadron Physics and Perspectives for KLOE-2 The KLOE-2 experiment aims to enlarge and extend the KLOE physics programs with a larger data set and an upgraded detector. The KLOE detector [1] has collected 2.5 fb⁻¹ at the e+e− collider DAΦNE [2], running at the peak of the φ resonance, M_φ ∼ 1020 MeV. An off-peak run also provided 250 pb⁻¹ at 1 GeV. The detector consists of a large cylindrical drift chamber [3] and an electromagnetic calorimeter [4], surrounded by a magnetic field of 0.52 T. The trigger [5] uses information from both the calorimeter and the drift chamber. Collected data are analyzed by an event classification filter [6], which selects and streams the various categories of events into different output files. A new beam crossing scheme, allowing for a reduced beam size and increased luminosity, is now operating at DAΦNE [7]. The KLOE-2 detector is collecting e+e− collision data at a centre-of-mass energy equal to M_φ, with the aim of a final data set of ∼5 fb⁻¹. Four tag stations [8], the High Energy Taggers (HET) and the Low Energy Taggers (LET), have been installed to detect electrons and positrons from the reaction e+e− → e+e−γ*γ* → e+e−X, to investigate γ*γ* → π0/ππ/η/ηπ physics at the φ resonance. An inner tracker [9] has been installed between the beam pipe and the inner wall of the DCH to increase the acceptance for low-transverse-momentum tracks and improve charged-vertex reconstruction. Photon detection has been improved by means of a small crystal calorimeter in the forward direction [10] and of a tungsten-scintillating-tile sampling device instrumenting the low-beta quadrupoles located inside the detector [11]. A detailed description of the extended experimental physics program can be found in Ref. [12]. In this paper, recent results obtained with KLOE data on the decay dynamics of η → π+π−π0 (Sec. 2) and on the φ → ηe+e− and φ → π0e+e− transition form factors (Sec. 3) are reported. Perspectives on light-meson spectroscopy at KLOE-2 are discussed in Sec. 4, while γγ physics at KLOE and KLOE-2 is described in Sec. 5. Figure 1. Dalitz plot analysis of η → π+π−π0: data-MC comparison for the missing mass squared of the π0 (left) and the opening angle between photons in the π0 rest frame (right). The vertical lines represent the selection cuts. Dalitz plot analysis of η → π+π−π0 The η → π+π−π0 decay amplitude is driven by the u-d quark mass difference [13]. The Dalitz plot density is commonly described by a polynomial expansion in the X and Y variables, X = √3 (T_π+ − T_π−)/Q_η and Y = 3T_π0/Q_η − 1. In these relations, T is the kinetic energy of each pion in the η rest frame, while Q_η = m_η − 2m_π± − m_π0. The squared amplitude of the decay can then be extracted with a fit: |A(X, Y)|² ≃ N(1 + aY + bY² + cX + dX² + eXY + fY³ + gX²Y + hXY² + lX³ + ...). In 2008, the KLOE experiment measured for the first time the Dalitz plot parameters up to the term f (KLOE08) [14]. These results have recently been confirmed, although with less precision, by the WASA and BESIII experiments [16,17]. A new measurement has been carried out with KLOE data (KLOE16), with an independent and ∼4 times larger data set (1.7 fb⁻¹), a new analysis scheme and an improved Monte Carlo simulation, providing the parameters of the decay matrix element with improved accuracy [15]. In KLOE, light mesons are produced via radiative decays of the φ and are tagged by identifying the monochromatic recoil photon of energy E_recoil. The η → π+π−π0 selection requires two additional prompt neutral clusters from the π0 and two tracks with opposite curvature in the drift chamber pointing to the IP.
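Stepping back to the Dalitz-plot parametrization introduced above, the sketch below evaluates the truncated polynomial expansion of |A(X, Y)|² on a grid of the (X, Y) variables. The parameter values are hypothetical placeholders chosen only to show the interface, not the published KLOE results, and a real analysis would fold this density with the smearing matrix and efficiency before fitting.

```python
import numpy as np

def dalitz_density(X, Y, a, b, c=0.0, d=0.0, e=0.0, f=0.0, g=0.0, N=1.0):
    """Polynomial parametrization |A(X, Y)|^2 ~ N(1 + aY + bY^2 + cX + dX^2
    + eXY + fY^3 + gX^2 Y + ...), truncated at the terms quoted in the text."""
    return N * (1 + a*Y + b*Y**2 + c*X + d*X**2 + e*X*Y + f*Y**3 + g*X**2*Y)

# Illustrative (hypothetical) parameter values, not the published measurement.
a, b, d, f = -1.1, 0.15, 0.08, 0.14
X = np.linspace(-1.0, 1.0, 201)
Y = np.linspace(-1.0, 1.0, 201)
XX, YY = np.meshgrid(X, Y)
density = dalitz_density(XX, YY, a=a, b=b, d=d, f=f)
print(density.shape, density.min(), density.max())
```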
Decay kinematics is then exploited to constrain E_recoil and to assign photons to the π0. Background scaling factors are obtained by fitting data with MC distributions for two variables: the missing mass squared of the π0 and the opening angle between photons in the π0 rest frame (Fig. 1). Cuts on these variables are used to reduce the background contamination. The resulting efficiency for signal events is 37.6%, with a background contamination of less than 1%. The resulting Dalitz plot density has been fitted, after background subtraction, with the complete third-order polynomial expansion of Eq. (3), folded with the smearing matrix and the analysis efficiency. The bin size is about three times the X, Y resolution. Fitting with the whole polynomial expansion, the c, e, h and l parameters are consistent with zero, as expected from C invariance. Fixing them to zero and comparing with the previous KLOE measurement (Tab. 1), the statistical uncertainty is reduced by about a factor of two, while the systematic uncertainties also improve, in some cases by a factor of 2-3. The major improvement in the systematic uncertainties comes from the analysis of the effect of the event classification with an unbiased prescaled data sample. When the g parameter is included in the fit, its value differs from zero at the 3σ level, improving the χ² probability from 24% to 56%. A comparison of the KLOE, WASA and BESIII results is reported in Fig. 2, showing good agreement among the experimental results, with KLOE16 being the most precise. Figure 2. Comparison of the a, b, d, f parameters extracted from the fit to the Dalitz plot distribution of the η → π+π−π0 decay. KLOE results [14,15] are compared with recent measurements from WASA [16] and BESIII [17]. Table 1. Fit results for the η → π+π−π0 Dalitz plot analysis, compared to the previous KLOE measurement. The smearing matrix of the η → π+π−π0 Dalitz plot is very close to diagonal. For this reason, acceptance-corrected data have been used to fit Eq. (3) directly. The extracted parameters are in agreement with the values obtained using the whole smearing matrix. The Dalitz plot acceptance-corrected data are provided as supplementary material in [15]. The unbinned integrated left-right (A_LR), quadrant (A_Q) and sextant (A_S) charge asymmetries provide a more sensitive test of C-parity conservation than the fit to the Dalitz plot. The values extracted from the analysis of the new KLOE data set are consistent with zero at the 10⁻⁴ level, thus improving on existing evaluations [18][19][20]. Systematic uncertainties are of the same size as the statistical ones, except for A_LR, where the error is dominated by the description of the Bhabha background. Experimental results are reported in Tab. 2. Dalitz decays of the φ meson The Vector Meson Dominance (VMD) model fails to describe the vector-to-pseudoscalar transition form factor (TFF) for the process ω → π0µ+µ−, as measured by the NA60 collaboration [21]. The only other existing experimental result for a VPγ* TFF comes from the SND experiment, which measured the M_ee invariant-mass distribution of the φ → ηe+e− decay on the basis of 213 events [22], with a statistical error too large to confirm the NA60 evidence. New measurements of VPγ* transitions are therefore needed. A detailed study of the φ → ηe+e− and φ → π0e+e− decays has been performed with 1.7 fb⁻¹ of KLOE data [23,24]. The φ → ηe+e− decay has been studied using the η → π0π0π0 final state.
Preselection cuts require: (i) two tracks of opposite sign originating from the interaction point (IP) plus six prompt photon candidates; (ii) a loose cut on the six-photon invariant mass, 400 < M_6γ < 700 MeV; (iii) a 3σ cut on the recoil mass against the e+e− pair, M_recoil(ee). A residual background contamination, due to φ → ηγ events with photon conversion on the beam pipe (BP) or drift chamber walls (DCW), is rejected by extrapolating the tracks of the e+, e− candidates back to the BP/DCW surfaces and then reconstructing their invariant mass and distance. Both quantities are small for events coming from photon conversion. φ → KK and φ → π+π−π0 events surviving the analysis cuts have more than two pions in the final state. They are rejected using the time of flight to the calorimeter. When an EMC cluster is associated with a track, the arrival time at the calorimeter is evaluated both with calorimeter (T_cluster) and drift chamber (T_track) information. Events with an e+, e− candidate outside a 3σ window on the DT = T_track − T_cluster variable are rejected. A comparison between data and Monte Carlo events at different steps of the analysis is reported in Fig. 3. At the end of the analysis chain, 30,577 events are selected, with a residual background contamination of ∼3%. After background subtraction, the measured branching fraction for the φ → ηe+e− process is much more precise than the present PDG average of (1.15 ± 0.10) × 10⁻⁴. The slope of the transition form factor, b_φη, has been obtained from a fit to the di-lepton invariant mass using the differential cross section from Ref. [25] and the transition form factor in the one-pole parametrization F(q²) = 1/(1 − q²/Λ²), where the slope parameter is defined as b = [dF(q²)/dq²]|_{q²=0} = Λ⁻². The result is in agreement with the VMD prediction (b_φη = 1 GeV⁻²). Fit results are reported in Fig. 4, left. The squared modulus of the transition form factor, |F_φη(q²)|², as a function of the e+e− invariant mass (Fig. 4, right) has been obtained by dividing the M_ee spectrum bin by bin by the corresponding distribution obtained for MC events generated with a constant transition form factor. The value of b_φη extracted from this fit is in agreement with Eq. (6). No data are available on the transition form factor of the φ → π0e+e− decay. Unlike the φ → ηe+e− channel, a large background contamination is still present after the preselection cuts, which require two tracks and two photon candidates in the final state. Dedicated analysis cuts strongly reduce the main background component of Bhabha scattering events, which dominates for M_ee > 300 MeV, to ∼20% (Fig. 5, left). The other relevant residual background contribution is from φ radiative decays. At the end of the analysis, about 14,500 events are selected, with a total background contamination of ∼30%. The data-MC comparison is shown in Fig. 5 for different kinematical variables. The background contribution is removed bin by bin by subtracting the fits to each single background component from the data points in the M_recoil(ee) distribution (Fig. 6, left). The branching ratio in the available q² range has been obtained from the background-subtracted e+e− mass spectrum by applying an efficiency correction evaluated bin by bin. Figure 4. Left: fit to the di-lepton invariant mass for the φ → ηe+e− decay channel (fit probability Prob(χ²) = 12.6%). Right: φη form factor as a function of the di-lepton invariant mass. The blue curve is the fit result, with its uncertainty; expectations from VMD and Ref. [26] are reported in red and pink, respectively.
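The one-pole parametrization used in the fit above can be illustrated with a short sketch that converts a slope parameter b into the pole scale Λ and evaluates |F(q²)|² over the accessible di-lepton mass range. The slope value below is a hypothetical placeholder rather than the KLOE measurement, and the upper end of the q² range is only indicative of the φ → ηe+e− phase space.

```python
import numpy as np

def one_pole_ff(q2, lambda2):
    """One-pole transition form factor F(q^2) = 1 / (1 - q^2 / Lambda^2);
    its slope at q^2 = 0 is b = dF/dq^2 = 1 / Lambda^2."""
    return 1.0 / (1.0 - q2 / lambda2)

b = 1.3                        # hypothetical slope in GeV^-2, for illustration only
lambda2 = 1.0 / b              # pole parameter Lambda^2 in GeV^2
q2 = np.linspace(0.0, 0.22, 50)   # roughly up to (m_phi - m_eta)^2, in GeV^2

ff_squared = np.abs(one_pole_ff(q2, lambda2))**2
print(ff_squared[:5], ff_squared[-1])
```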
Figure 5. Analysis of the φ → π0e+e− decay channel: data-MC comparison for the di-lepton invariant mass (left) and the cos(ψ*) variable (right) at the end of the analysis chain. The first error includes the statistical and normalization errors, while the second one is due to systematics on the analysis cuts and background subtraction. This result can be extrapolated to the full q² range by using the theoretical description of the decay that best fits our transition form factor data [27]. The di-lepton invariant mass after background subtraction and efficiency correction is reported in Fig. 6, right, and compared with theoretical predictions [27][28][29]. The slope of the transition form factor has been extracted by fitting this curve in the one-pole approximation: b_φπ0 = (2.02 ± 0.11) GeV⁻². Perspectives from e+e− interactions with KLOE-2 data The KLOE-2 detector is taking data at the upgraded DAΦNE e+e− collider, aiming to collect 5 fb⁻¹ at the φ peak. This larger data sample will allow a deeper investigation of light-meson properties, decay dynamics and transition form factors. About 1.5 × 10¹⁰ φ mesons will be produced, which will be used to complete and extend the study of the transition form factors from the Dalitz decays of vector mesons. As an example, the first evidence of the φ → ηπ+π− and φ → ηµ+µ− decays is expected at KLOE-2. Currently, both decays have only been searched for by the CMD-2 experiment, which set upper limits [30,31]. A sample of ∼2.5 × 10⁶ ω mesons will also be produced through the Initial State Radiation (ISR) process e+e− → ωγ_ISR. This will provide a large-statistics data sample for the study of the ω → π+π−π0 decay dynamics. The peak obtained with 300 pb⁻¹ of KLOE data is visible in Fig. 7, left. It has been selected by requiring two tracks and three photons in the final state and applying a kinematic fit. No specific background rejection has been performed. Radiative decays of the φ meson will provide the largest data set of η mesons: an additional 2.5 × 10⁸ will be produced in 5 fb⁻¹. A very clean η sample is tagged by means of the monochromatic recoil photon of 363 MeV, which is clearly identified as the most energetic neutral cluster in the case of three or more particles in the final state (see Fig. 7, right). These data will be used to improve existing limits on forbidden η decay modes, listed in Tab. 3. Existing limits and expectations combining KLOE and KLOE-2 data are also reported there. KLOE-2 can also improve the knowledge of η decays into four charged particles, listed in Tab. 4. These decays probe the structure of the η meson [25] and test theoretical predictions for the branching ratios [39][40][41][42]. Moreover, the π+π−e+e− final state could reveal CP violation beyond the prediction of the Standard Model through a measurement of the angular asymmetry A_φ between the pion and electron decay planes [43]. KLOE has already measured the first two processes of Tab. 4, obtaining the most precise measurement of the BRs and the first measurement of A_φ, with a statistical sensitivity of 2.5 × 10⁻² [37,38]. The larger KLOE-2 data set, together with the increased track acceptance of the Inner Tracker, will improve all these measurements, also allowing the first evidence for the decay η → µ+µ−e+e− to be obtained.
γγ interactions at KLOE and KLOE-2 The γγ couplings and partial widths of mesons provide information about their structure and can be measured in the e+e− → e+e−γ*γ* → e+e−X processes (Fig. 8), where X is a generic J^PC = 0±+, 2±+ final state. In the case of a single particle R produced in the final state, the cross section of the process is proportional to the transition form factor. A precise knowledge of the γγ decay width provides a measurement of the transition form factor, which is very important for the determination of the hadronic light-by-light contribution to the anomalous magnetic moment of the muon. Measurement of Γ(η → γγ) at KLOE At KLOE, where there is no tagging of the outgoing e+e−, γγ interactions have been studied using off-peak data (240 pb⁻¹ collected at √s = 1 GeV), to avoid backgrounds from φ decays. The η partial width, Γ(η → γγ), is extracted from the measurement of the e+e− → e+e−η cross section, using both neutral and charged η → πππ decay channels [48]. The main background is due to the e+e− → ηγ reaction, with an undetected recoil photon. After reducing the background components with specific kinematical cuts, signal events are extracted by fitting the two-dimensional plot of M²_miss versus p_L/T (Fig. 9) with the expected Monte Carlo components, where M²_miss is the squared missing mass and p_L/T is the η longitudinal or transverse momentum. Figure 9. Top: η momentum distribution (left) and M²_miss distribution (right) for γγ → η → π0π0π0 events. Bottom: η transverse momentum (left) and M²_miss distribution (right) for γγ → η → π+π−π0 events. Points with error bars are data, black solid histograms are the fit result. The different fit components are shown in colours. The π0 Transition Form Factor at KLOE-2 The upgrade of the KLOE-2 detector, with four detectors installed to tag electrons and positrons from the reaction e+e− → e+e−γ*γ* → e+e−X, will give the opportunity to investigate γγ physics also at the φ resonance for the reactions γγ → π0/ππ/η/ηπ [12]. Single-pseudoscalar production will improve the determination of the two-photon decay widths of these mesons, Γ_γγ. For the π0, the most precise measurement is obtained exploiting the Primakoff effect, reaching an accuracy of 2.8% [49]. At KLOE-2, the coincidence between the KLOE central detector and the HET taggers will provide a very clean sample of ∼1900 γγ → π0 events per fb⁻¹, with the background from radiative Bhabha scattering events being rejected by using the coincidence between the central detector and the HET stations. An accuracy of 1% on Γ_γγ(π0) is reachable with 5-6 fb⁻¹, matching the current theory precision (Fig. 10, left). With the same amount of data, the measurement of the π0 → γγ* transition form factor in the space-like region at low momentum transfer of the virtual photon will be possible with 5-6% accuracy. The KLOE-2 measurement will cover an unexplored region of the momentum transfer, as shown in Fig. 10, right. For the form factor measurement, the coincidence between the central detector and one of the HET stations will be used. These two proposed measurements are relevant for the theoretical evaluation of the hadronic light-by-light contribution to the muon magnetic anomaly, which is limited by the knowledge of the pseudoscalar transition form factors [50]. Figure 10. Left: experimental measurements of Γ(π0 → γγ) (black dots) and predictions from χPT with (black line) and without (blue dotted line) the chiral anomaly. The theoretical error band is displayed in grey. Right: KLOE-2 expectations for the π0γγ* TFF (red triangles), compared with existing measurements from CELLO [51] (black triangles), CLEO [52] (blue squares) and BaBar [53] (green triangles).
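A quick back-of-the-envelope check of the quoted 1% target on Γ_γγ(π0): with the ∼1900 clean γγ → π0 events per fb⁻¹ mentioned above, 5-6 fb⁻¹ yields roughly 10⁴ events, and the purely statistical relative uncertainty 1/√N is then about 1%. The sketch below only reproduces this counting argument and assumes that background and systematic uncertainties are subdominant, which is not guaranteed.

```python
import math

events_per_fb = 1900          # clean gamma-gamma -> pi0 events per fb^-1 (from the text)
for lumi in (5.0, 6.0):       # integrated luminosity in fb^-1
    n = events_per_fb * lumi
    stat = 1.0 / math.sqrt(n)     # relative statistical uncertainty on the yield
    print(f"L = {lumi} fb^-1: N = {n:.0f}, stat. precision ~ {100 * stat:.1f}%")
```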
Conclusions The large data sample of light mesons collected with the KLOE detector has provided several state-of-the-art measurements of the properties of light scalar, pseudoscalar and vector mesons. In the last year, the most precise measurements of the η → π+π−π0 decay dynamics and of the φ → ηe+e− and φ → π0e+e− transition form factors have been published. The KLOE-2 run is currently in progress, aiming to collect at least 5 fb⁻¹ within the first months of 2018. This data sample will allow the high-precision investigation of light-meson properties to be extended. KLOE-2 will be an ideal tool for detailed studies of the π0 and η transition form factors at momentum transfer below 1 GeV².
The stochastic gravitational-wave background in the absence of horizons Gravitational-wave astronomy has the potential to explore one of the deepest and most puzzling aspects of Einstein's theory: the existence of black holes. A plethora of ultracompact, horizonless objects have been proposed to arise in models inspired by quantum gravity. These objects may solve Hawking's information-loss paradox and the singularity problem associated with black holes, while mimicking almost all of their classical properties. They are, however, generically unstable on relatively short timescales. Here, we show that this 'ergoregion instability' leads to a strong stochastic background of gravitational waves, at a level detectable by current and future gravitational-wave detectors. The absence of such a background in the first observation run of Advanced LIGO already imposes the most stringent limits to date on black-hole alternatives, showing that certain models of 'quantum-dressed' stellar black holes can be at most a small percentage of the total population. The future LISA mission will allow for similar constraints on supermassive black-hole mimickers. I. INTRODUCTION According to General Relativity and the Standard Model of particle physics, dark and compact objects more massive than ≈ 3 M⊙ must be black holes (BHs) [1]. These are characterized by an event horizon, causally disconnecting the BH interior from its exterior, where observations take place. In classical gravity, BHs can form from gravitational collapse [2], providing sound and compelling theoretical support for their existence. When quantum effects are included at the semiclassical level, however, BHs are not completely "dark", but evaporate by emitting thermal black-body radiation [3]. For astrophysical BHs, evaporation is negligible. Therefore, BHs are commonly accepted to exist, with masses in the ranges from ∼ 10 M⊙ to ∼ 60 M⊙ (and perhaps larger) and from ∼ 10⁵ M⊙ to ∼ 10¹⁰ M⊙, and to play a fundamental role in astronomy and astrophysics [4]. It is sometimes not fully appreciated that BHs are truly "holes" in spacetime, where time "ends" and inside which the known laws of classical physics break down [5]. Furthermore, the classical concept of an event horizon seems at odds with quantum mechanics, and the very existence of BHs leads to unsolved conundra such as information loss [5]. Thus, in reality, the existence of BHs is an extraordinary claim, for which one should provide equally impressive evidence [6][7][8]. Over the last decades, several alternatives and arguments have been put forward according to which, in a quantum theory, BHs would either not form at all, or would just be an ensemble of horizonless quantum states [9][10][11][12][13][14][15].
From a theoretical standpoint, BHs therefore lie at the interface between classical gravity, quantum theory and thermodynamics, and understanding their nature may provide a portal to quantum gravity or other surprises. In the formal mathematical sense, it is impossible to ever show that BHs exist, since in General Relativity their definition requires knowledge of the whole spacetime, including the future [2]. However, the newborn gravitational-wave (GW) astronomy allows us to constrain alternatives to BHs to an unprecedented level. GW detectors can rule out a wide range of models, through observations of inspiralling binaries or of the relaxation of the final object forming from a merger [6][7][8][16][17][18]. Here, we explore one significant effect that follows from the absence of the most salient feature of a BH, the event horizon. We will show that compact, horizonless spinning geometries would fill the universe with a background of GWs detectable by current and future instruments, through a classical process known as the "ergoregion instability" [19,20]. II. ERGOREGION INSTABILITY In Einstein's theory, the unique globally vacuum astrophysical solution for a spinning object is the Kerr geometry. It depends on two parameters only: its mass M and angular momentum J = GM²χ/c, with G Newton's constant, c the speed of light and |χ| ≤ 1 a pure number. The compact, dark objects in our universe could depart from the Kerr geometry in two distinct ways. The near-horizon structure might change significantly, while retaining the horizon [21][22][23]. In coalescing binaries, such effects can be probed by GW measurements of the quadrupole moment, the tidal absorption and deformability [7,8,24,25], and especially the quasinormal oscillation modes of the remnant object [26,27]. Here we explore a second (and more subtle) scenario, where the geometry is nearly everywhere the same as that of a BH, but the horizon is absent. Two smoking-gun effects arise in this scenario. First, the late-time ringdown consists generically of a series of slowly damped "echoes" [6,8,25,28]. Furthermore, by working as a one-way membrane, horizons act as a sink for external fluctuations, including those inside the ergoregion, where negative-energy states are possible [20,29]. Such states are typically associated with instabilities: their existence allows scattering waves to be amplified, i.e. positive-energy perturbations can be produced, which can travel out of the ergoregion. Energy conservation then requires the negative-energy states inside the ergoregion to grow. In a BH, this piling up can be avoided by dumping the negative energy into the horizon, thus stabilizing the object. In the absence of a horizon, instead, this process leads to an exponential cascade. As a consequence, spinning BHs are linearly stable, but any horizonless object sufficiently similar to a rotating BH is unstable [19,[30][31][32]. FIG. 1. Schematic potential for a non-spinning ultracompact object, as a function of the tortoise radial coordinate (in practice, the coordinate time t of a photon); the asymptotic solution takes the form A_out e^{iωz} + A_in e^{-iωz}. For object radii r_0 ∼ r_+, the radiation travel time from the photosphere to the surface scales approximately as t_0 ∼ t_H |log ε|. The travel time from the surface to the interior is parametrized as t_interior.
A. Canonical model: perfectly reflecting surface We start with the simplest model of horizonless geometries: a compact body whose exterior is described by the Kerr metric, and with a perfectly reflective surface. This spacetime defines a natural cavity, i.e. the region between the object's surface and the potential barrier for massless particles (the "photon sphere") [see Fig. 1]. In this cavity, negative-energy modes, and thus instabilities, can be excited [15,33]. The dynamics is controlled by two parameters: the size of the cavity and the object's angular velocity, which determine how fast the instability grows. The cavity size can be parametrized by the light travel time t_0 (as observed at infinity) between the photon sphere and the object's surface [6][7][8][20]. The timescale t_0 also defines a set of possible modes, with fundamental frequencies ω = ω_R + iω_I and ω_R ∼ π/t_0. The instability is controlled by the amplification factor |A|² of the ergoregion at this frequency [20], i.e. ω_I ∼ |A|²/t_0. This follows from a very generic "bounce-and-amplify" argument, which was shown to accurately describe specific models [15,20]. A test scalar field Φ in such a geometry grows exponentially with time, Φ(t) ∼ e^{t/τ}, with τ ≡ 1/ω_I. The characteristic unstable modes can be computed numerically and agree well with bounce-and-amplify estimates [15]. When t_0 ≫ t_H ≡ GM/c³, these modes are well described by the expressions in Eqs. (1) and (2) [15,[32][33][34], where ∆ = √(1 − χ²), Ω(χ) is the object's angular velocity, β_l is a numerical coefficient for a wave with angular number l [20,35], and p is an even (odd) integer for Dirichlet (Neumann) boundary conditions at the surface. Thus, the spacetime is unstable for ω_R(ω_R − mΩ) < 0 (i.e., in the superradiant regime [20]), on a timescale τ ≡ 1/ω_I. Extensions of such calculations to electromagnetic fields with different boundary conditions at the surface show that the simple bounce-and-amplify estimate is a good upper bound on the instability timescale [36]. Moreover, the amplification factors of scalars are orders of magnitude smaller than those of GWs [20,35]. To be conservative, we therefore use the scalar results to estimate the background of GWs. Equations (1) and (2) are valid for any object able to completely reflect the incoming radiation. In particular, if the object's surface sits at a constant (Boyer-Lindquist) radius r_0 = r_+(1 + ε), where r_+ = GM(1 + ∆)/c² is the location of the (would-be) event horizon in these coordinates, then the travel time reads t_0 ∼ t_H |log ε| [6,8,25]. Several different arguments about the magnitude of ε can be made. If quantum-gravity effects become important at the Planck timescale t_P = √(ℏG/c⁵), it is natural to set ε = t_P/t_H ∼ 10⁻³⁹ − 10⁻⁴⁶ for stellar-mass to supermassive dark objects. These objects were dubbed ClePhOs in Ref. [15], and are impossible to rule out in practice via electromagnetic observations [8,15].
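To make the numbers above concrete, the sketch below evaluates ε = t_P/t_H and the cavity crossing time t_0 ∼ t_H |log ε| for a stellar-mass and a massive object, using standard SI values for the physical constants. The choice of natural logarithm only affects the O(1) prefactor, and the formula is the order-of-magnitude scaling quoted in the text rather than the exact expression.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
hbar = 1.055e-34     # J s
M_sun = 1.989e30     # kg

t_P = math.sqrt(hbar * G / c**5)              # Planck time

for mass_in_suns in (20.0, 1e6):
    t_H = G * mass_in_suns * M_sun / c**3     # dynamical timescale GM/c^3
    eps = t_P / t_H                           # surface displacement parameter
    t_0 = t_H * abs(math.log(eps))            # cavity crossing time t_0 ~ t_H |log eps|
    print(f"M = {mass_in_suns:g} Msun: eps ~ {eps:.1e}, t_0 ~ {t_0:.1e} s "
          f"({t_0 / t_H:.0f} t_H)")
```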
B. Modelling the interior The above description effectively decouples the outside geometry from the inside, and is accurate when the flux across the surface vanishes. However, some models may have important transmittance. There are thus three different scenarios that need to be discussed in the general case: i. The object does not dissipate, and the light travel time inside the object is small (i.e., t_interior ∼ t_H ≪ t_0). This situation describes most of the known models available in the literature. In such a case, the geometric center of the star effectively works as a perfectly reflecting mirror (i.e. ingoing radiation from one side exits on the other side with negligible delay), and the previous results (1)-(4) still apply. ii. The object does not dissipate, and t_interior ≳ t_0. For these objects, a model of the interior is necessary, because even though all ingoing radiation will exit on the other side, delays/scattering due to the propagation in the interior will be important. To describe this case in a model-independent way, we assume that (1)-(2) continue to apply, and we attempt to constrain the parameter t_0 directly, without assuming Eq. (4). iii. The object dissipates radiation in its interior. In this case, the instability may be completely quenched if the absorption rate is large [34]. For highly spinning objects, this requires at least a 0.4% absorption rate for scalar fields, but up to a 100% absorption rate for gravitational perturbations and almost maximal spins [20,37]. While these numbers reduce to 0.1% for spins χ ≲ 0.7, they are still several orders of magnitude larger than what is achievable with viscosity from nuclear matter [34]. Based on the above arguments, we expect the following results to cover all relevant models of BH mimickers. C. Evolution of the instability The evolution of the object's mass and angular momentum, under energy and angular momentum losses, can be computed within the adiabatic approximation (because τ ≫ t_H) [38]. The unstable mode simply drains energy and angular momentum from the object, which we assume to have an equation of state such that ε = const during the evolution. From energy and angular momentum conservation, one obtains the evolution equations (5) for each mode, where Ė_0 encodes the initial perturbation of the (unstable) system. Since the instability is exponential, the overall evolution is insensitive to the precise value of these initial conditions. The equations above are valid for a monochromatic mode in a generic stationary and axisymmetric background. The energy flux can be written as dE/df = Ė/ḟ, where f = ω_R/(2π) is the frequency associated with the mode. From the evolution equations, we can evaluate Ṁ and J̇ and, in turn, ω̇_R. To leading order in the ε → 0 limit, we obtain ω_R ∼ mΩ together with an explicit expression for the energy flux, valid for any angular numbers l and m. In the same limit, one obtains the critical value of the spin, χ_crit, above which the ergoregion instability occurs [34]. Thus, if χ(t = 0) > χ_crit, the instability removes energy and angular momentum until superradiance is saturated, i.e.
χ(t ≫ τ) → χ_crit. Note that the small spin value χ_crit is compatible with the low measured spins of the inspiralling compact objects detected via GWs so far [39]. Since we are interested in gravitational perturbations, when solving the evolution equations (5) and computing the energy flux (6) we only consider the dominant l = m = 2 mode and neglect higher modes. Note also that the above analysis assumes that the backreaction of these fields on the geometry is negligible. Our results indicate that these are always reliable approximations. D. GW stochastic background A population of GW sources too far away and/or too weak to be detected individually may still give rise to a "stochastic" background detectable by a network of interferometers, e.g. the LIGO/Virgo network, sensitive to frequencies ranging from ∼ 10 to ∼ 100 Hz [40,41]; the Pulsar-Timing-Array experiments [42][43][44][45][46], which are already constraining backgrounds at frequencies ∼ 10⁻⁹ − 10⁻⁶ Hz; and the future LISA constellation [47], which will be sensitive to frequencies between 10⁻⁶ Hz and ∼ 1 Hz. The background is produced by the incoherent superposition, at the detector, of the GW signals from all the unresolved sources in the population. The background can be characterized either by (i) its (dimensionless) energy spectrum Ω_gw(f_o) = (1/ρ_c) dρ_gw/d ln f_o (ρ_gw being the background's energy density, f_o the frequency measured at the detector and ρ_c the critical density of the Universe at the present time), obtained by summing the energies emitted by all the unresolved sources in a given frequency bin [48]; or (ii) directly by the characteristic strain h_c(f_o) observed in the detector, which can be obtained by summing in quadrature (and binning in frequency) the strain amplitudes of all the unresolved sources [49]. The two quantities are related by h_c²(f_o) = 3H_0² Ω_gw(f_o)/(2π² f_o²), where H_0 ≈ 68 km/(s Mpc) is the Hubble rate. While these two ways of computing the background signal are equivalent [49], we have implemented both as a consistency check of our results. This also allows us to check that the number of sources contributing in each frequency bin is typically large as long as the bin size is ≳ 0.01 dex in the LISA band (which ensures that the number of sources contributing 99% of the signal in each bin ranges from thousands to millions). In the LIGO band, sources are even more numerous: frequency bins of 0.01 dex yield 10⁹ − 10¹⁴ sources contributing 99% of the signal in each bin. (These are mostly extragalactic sources, as Galactic ones give a negligible contribution to the background.) This in turn implies, in particular, that the background is expected to be smooth with that frequency binning [49]. When computing the stochastic background of unstable exotic compact objects, the energy flux given by Eq. (6) is defined in the frequency range f ∈ [f_min, f_max]; f_max can be computed using Eq. (1) for a given initial mass and spin of the compact object, and f_min is computed by solving the evolution equations (5) from the formation redshift of the compact object to the present time.
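The conversion between Ω_gw and the characteristic strain quoted above can be coded directly. The sketch below uses the standard relation h_c²(f) = 3H_0² Ω_gw(f)/(2π² f²) with the H_0 ≈ 68 km/(s Mpc) value from the text; the example frequencies and the flat Ω_gw ∼ 10⁻⁹ spectrum are placeholders for illustration, not results of the paper.

```python
import math

H0 = 68.0 * 1.0e3 / 3.086e22   # Hubble rate: 68 km/s/Mpc converted to 1/s

def h_c(f_hz, omega_gw):
    """Characteristic strain from the dimensionless energy spectrum,
    using h_c^2(f) = 3 H0^2 Omega_gw(f) / (2 pi^2 f^2)."""
    return math.sqrt(3.0 * H0**2 * omega_gw / (2.0 * math.pi**2)) / f_hz

# Example: a flat Omega_gw ~ 1e-9 evaluated at a LISA-band and a LIGO-band frequency
for f in (1e-3, 25.0):
    print(f"f = {f:g} Hz -> h_c ~ {h_c(f, 1e-9):.2e}")
```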
For the astrophysical populations of isolated BHs, we adopt the same models as in [48,53]. For stellar-origin BHs we account for both Galactic and extragalactic BHs that form from the core collapse of massive (≳ 20 M⊙) stars, by tracking the cosmic star-formation history and the metallicity evolution of the Universe [54]. We assume a uniform distribution for the initial spins, with χ ∈ [0, 1] as the most optimistic and χ ∈ [0, 0.5] as the most pessimistic scenario. For the massive (∼ 10⁴ − 10⁷ M⊙) and supermassive (∼ 10⁸ − 10¹⁰ M⊙) BHs that emit respectively in the LISA and PTA bands, we adopt the semi-analytic galaxy-formation model of Ref. [55] (with later incremental improvements described in [56][57][58]), which follows the formation of these objects from their high-redshift seeds and their growth by accretion and mergers. This growth is triggered in turn by the synergic co-evolution of the BHs with their host galaxies, of which we evolve both the various baryonic components and the dark-matter halos. This model is optimistic since it predicts a spin distribution skewed towards large spins, at least at low masses. To include astrophysical uncertainties in our computation, we also consider models in between our most optimistic and most pessimistic assumptions, as described in Ref. [48] (see Section III therein). III. RESULTS Our main results for the GW stochastic background from exotic compact objects are shown in Fig. 2 in the frequency bands relevant for LIGO/Virgo (left panel) and for LISA and an SKA-based pulsar timing array (right panel). The left panel suggests that the absence of a stochastic background in LIGO O1 already rules out our canonical model even for conservative spin distributions, while LIGO at design sensitivity will be able to rule out our canonical model even in more pessimistic scenarios than those assumed here, e.g. even if all BH-like objects had initial spin χ < 0.2. Similar results apply in the LISA band, whereas the stochastic signal is too small to be detectable by pulsar timing arrays, even in the SKA era. The level of the stochastic background shown in Fig. 2 can also be understood with an approximate analytic calculation [53]. The BH-mimicker mass fraction lost to GWs due to superradiance is F_sr ∼ O(1%) [48,59]. Because the signal spans about a decade in frequency [cf. Eq. (1)],
Δ ln f ∼ 1, and Ω_GW,sr = (1/ρ_c)(dρ_GW/d ln f) ∼ F_sr ρ_BH/ρ_c, with ρ_GW and ρ_BH the GW and BH-mimicker energy densities. In the mass range 10^4 - 10^7 M_⊙ relevant for LISA, ρ_BH ∼ O(10^4) M_⊙/Mpc^3, which gives Ω_GW,sr^LISA ∼ 10^-9. To estimate the background in the LIGO band, note that the background from ordinary BH binaries is Ω_GW,bin ∼ η_GW F_m ρ_BH/ρ_c, with η_GW ∼ O(1%) the GW emission efficiency for BH binaries [60], and F_m ∼ O(1%) [54] the fraction of stellar-mass BHs in merging binaries. This gives Ω_GW,sr/Ω_GW,bin ∼ F_sr/(η_GW F_m) ∼ 10^2, and because the LIGO O1 results imply Ω_GW,bin ≲ 10^-9 - 10^-8 [51,61], we obtain Ω_GW,sr^LIGO ≲ 10^-7 - 10^-6. As shown in Fig. 3, LIGO/Virgo and LISA are also able to place model-independent constraints on the stochastic signal from exotic compact objects. At design sensitivity, LIGO/Virgo can detect or rule out any model with t_0 < 10^8 t_H, whereas LISA can go as far as t_0 < 10^6.5 t_H. In other words, objects across which light takes 10^8 or fewer dynamical timescales to travel are ruled out. Finally, we note that although LIGO/Virgo rule out a wider range of the parameter space compared to LISA, it is still interesting to consider the constraints from both detectors, since they probe different BH populations. IV. DISCUSSION Our results suggest that the current upper limits on the stochastic background from LIGO O1 already rule out the simplest models of BH mimickers at the Planck scale, setting the strongest constraints to date on exotic alternatives to BHs. The most relevant parameter for our analysis is the light travel time within the object, t_0. LIGO/Virgo (LISA) can potentially rule out models where t_0 < 10^8 t_H (t_0 < 10^6.5 t_H). These results are not significantly dependent on the astrophysical uncertainties of the extragalactic BH distributions, and even if exotic compact objects are produced all at low spin, the order of magnitude of our constraints is unaffected. We conclude that all present and future models of exotic compact objects should either conform with t_0 ≳ 10^8 t_H or represent at most a fraction X of the compact-object population, the remainder being BHs. Since Eq. (8) scales linearly with X, Fig. 2 implies that the O1 upper limits impose X < 50% even if all BH-like objects are formed with low spins. At design sensitivity, these constraints could improve to X < 1%.
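The order-of-magnitude estimates quoted above can be checked with a few lines of arithmetic. The sketch below combines the numbers given in the text (F_sr, ρ_BH, η_GW, F_m) with an assumed value of the critical density of the Universe; it is illustrative only and not part of the paper's computation.

```python
# Back-of-the-envelope check of the estimates quoted above (illustrative
# numbers only). In the LISA band, Omega_GW,sr ~ F_sr * rho_BH / rho_c
# with F_sr ~ 1% and rho_BH ~ 1e4 Msun/Mpc^3; rho_crit ~ 1.3e11 Msun/Mpc^3
# is an assumed value for the critical density, not taken from the text.

F_sr = 0.01                 # mass fraction radiated through superradiance
rho_bh_lisa = 1.0e4         # BH-mimicker density in Msun / Mpc^3
rho_crit = 1.3e11           # critical density in Msun / Mpc^3 (assumed)

omega_lisa = F_sr * rho_bh_lisa / rho_crit
print(f"Omega_GW,sr (LISA band) ~ {omega_lisa:.1e}")   # ~ 1e-9

# In the LIGO band, the ratio to the binary background is
# Omega_sr / Omega_bin ~ F_sr / (eta_GW * F_m) with eta_GW ~ F_m ~ 1%.
eta_gw, F_m = 0.01, 0.01
ratio = F_sr / (eta_gw * F_m)
print(f"Omega_sr / Omega_bin ~ {ratio:.0e}")           # ~ 1e2
```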
It might be possible to evade these constraints by incorporating some (exotic and still unclear) mechanism quenching the ergoregion instability, e.g. absorption rates several orders of magnitude larger than those of neutron stars [34]. While such a quenching mechanism might result in thermal or quasi-thermal electromagnetic radiation (which can be constrained by electromagnetic observations of BH candidates [62,63]), quantum-dressed BH mimickers might evade such constraints by trapping thermal energy in their interiors for very long timescales [8,15]. Our results also imply that in the simplest models of nondissipative exotic ultracompact objects, GW echoes [6,8,25] can appear in the post-merger phase only after a delay time τ_echo ∼ t_0 ≳ 10^8 t_H ≈ 10^4 [M/(20 M_⊙)] s, which is much longer than what was claimed to be present in LIGO/Virgo data [64][65][66] (see also [67]). The latter claims would not be in tension with our bounds on the GW stochastic background only if one postulates exotic objects that are dissipative enough to absorb at least O(0.1)% of the gravitational radiation (which is several orders of magnitude more than what is typically achievable with nuclear matter). In this case, the stochastic background from a population of "echoing" merger remnants might still be detectable [68]. FIG. 2. Extragalactic stochastic background for the canonical model in the LIGO/Virgo band (left panel) and in the LISA and PTA bands (right panel). The blue band brackets our population models (from the most pessimistic to the most optimistic, as explained in the main text). The background depends very weakly on ε as long as t_0 ∼ t_H |log ε| ≲ 10^5 t_H, so here we show only the case t_0 ∼ t_H |log 10^-40|. The black lines are the power-law integrated curves of [50], computed using noise PSDs for LISA with one year of observation time [47], LIGO's first observing runs (O1), LIGO at design sensitivity as described in [51], and an SKA-based pulsar timing array as described in [52]. By definition, ρ_stoch > 2 (ρ_stoch = 2) when a power-law spectrum intersects (is tangent to) a power-law integrated curve. FIG. 3. Same as Fig. 2, for an agnostic model for the compact-object (dissipationless) interior, where the light travel time t_0 between the light ring and the surface is a free parameter.
Detection of Cellular Senescence in Human Primary Melanocytes and Malignant Melanoma Cells In Vitro Detection and quantification of senescent cells remain difficult due to variable phenotypes and the absence of highly specific and reliable biomarkers. It is therefore widely accepted to use a combination of multiple markers and cellular characteristics to define senescent cells in vitro. The exact choice of these markers is a subject of ongoing discussion and usually depends on objective reasons such as cell type and treatment conditions, as well as subjective considerations including feasibility and personal experience. This study aims to provide a comprehensive comparison of biomarkers and cellular characteristics used to detect senescence in melanocytic systems. Each marker was assessed in primary human melanocytes that overexpress mutant BRAFV600E, as it is commonly found in melanocytic nevi, and in melanoma cells after treatment with the chemotherapeutic agent etoposide. The combined use of these two experimental settings is intended to allow robust conclusions on the choice of senescence biomarkers when working with melanocytic systems. Further, this study supports the development of standardized senescence detection and quantification by providing a comparative analysis that might also be helpful for other cell types and experimental conditions. Introduction Cellular senescence describes a stable state of growth arrest, commonly accompanied by a wide range of molecular and phenotypic changes. In the decades since its first description by Hayflick and Moorhead in 1961 [1], cellular senescence has been linked to numerous physiological and pathological conditions, ranging from developmental processes to neurodegenerative diseases and cancer [2]. In each of these conditions, establishment of senescence is caused by one of three major mechanisms: telomere shortening, oncogene activation, or extensive DNA damage [3]. When it comes to melanocytes and malignant melanoma, two of these mechanisms are of special importance: first, oncogene-induced senescence (OIS) as a central feature of melanocytic nevi that prevents further oncogenesis and malignant transformation of such benign lesions [4]. As melanocytic nevi have been causally linked to the development of malignant melanoma [5,6], stabilization of OIS or clearing of senescent melanocytes remains an important and promising approach for preventing tumorigenesis [7,8]. Second, DNA damage-induced senescence is an important part of the therapeutic treatment of malignant melanoma. The majority of cytotoxic treatments cause DNA damage to induce apoptosis and senescence, thereby halting tumor growth [9]. However, growing evidence indicates that senescent cancer cells might become therapy-resistant, resulting in residual tumor masses and potentially recurrent malignancies [10,11]. Consequently, there is a need for efficient and reliable detection and targeting of senescent cells. Although fundamental hallmarks of cellular senescence are conserved in most experimental and clinical conditions, the exact phenotype is often variable and affects the reliability of biomarkers [12]. In addition, the majority of these markers are not completely specific for cellular senescence, e.g., growth arrest and activation of the DNA damage response [12,13]. Melanocytes introduce another potential problem since they physiologically show high levels of lysosomal beta-galactosidase activity [14].
This potentially interferes with detection of senescence-associated beta-galactosidase (SA-β-Gal) [15], one of the most widely used markers of cellular senescence [16], and needs to be taken into consideration. The aim of this study is to comprehensively evaluate biomarkers of cellular senescence for their use in primary melanocytes and melanoma cells. It is thought to support the ongoing discussion on the choice of the best markers, especially in the complex field of melanoma research, and thereby to improve the reliability and reproducibility of senescence detection. Cell Culture Normal human melanocytes (NHEM, neonatal) were obtained from Lonza and cultivated in MGM-4 BulletKit medium (Lonza, Basel, Switzerland) with 1% penicillin/streptomycin. NHEM from different donors were used between passages 6 and 8. Cells from the same donor, but at a different passage, were considered biological replicates. HEK293T cells for transduction were a generous gift from Prof. Stephan Hahn (Ruhr-Universität Bochum, Germany). Their cultivation required high-glucose Dulbecco's modified Eagle's medium (DMEM) with 10% FCS and 1% penicillin/streptomycin. Both NHEM and HEK293T cells were incubated at 37 °C and 5% CO2 in a humidified atmosphere. Melanoma cell line Mel Juso was cultivated in RPMI 1640 medium with 2% sodium bicarbonate, 10% FCS and 1% penicillin/streptomycin at 37 °C and 8% CO2. Mycoplasma contamination was regularly excluded for all primary cells and cell lines. When reaching approximately 80% confluence, cells were washed with PBS and detached using a solution of 0.05% trypsin and 0.02% EDTA in PBS. After centrifugation and removal of the trypsin solution, cells were either passaged or counted using a Neubauer counting chamber. Melanoma cell line Mel Im was cultured as described in Section 2.3. Unless otherwise stated, cell culture chemicals and media were obtained from Sigma Aldrich (Steinheim, Germany). Lentiviral Transduction of Melanocytes Lentiviral transduction using a third-generation vector system was described elsewhere [17]. Briefly, HEK293T cells were seeded in 10 cm plates at a density of 2 × 10^6 cells/plate. On the next day, three vectors were introduced simultaneously using transfection with Lipofectamine® LTX (Thermo Fisher, Waltham, MA, USA): an envelope plasmid pHIT-G, a packaging plasmid pCMV ∆R8.2, and a target plasmid with the DNA of interest (either copGFP or B-RAF V600E). Cells were incubated for 16 h before the medium was changed to MGM-4 BulletKit medium. After additional incubation for 24 h, supernatants were collected, filtered, and applied to NHEM. Polybrene® (Santa Cruz, Dallas, TX, USA) was added to a final concentration of 1 µg/mL to increase the efficiency of viral uptake. Due to the high sensitivity of primary cells, lentiviral supernatants were removed after approximately 6 h. Cells were washed three times with PBS and cultivated in regular MGM-4 BulletKit medium. All experiments using transduced melanocytes started exactly 7 days after transduction to allow establishment of a senescent phenotype. All data in this study are derived from samples that are either untransfected or transfected with scrambled siRNAs. Different transductions are referred to as Mock (copGFP control plasmid) or BRAFm (B-RAF V600E). Induction of Senescence in Melanoma Cells Etoposide treatment started 24 h after approximately 200,000 cells/well were seeded in 6-well plates.
Etoposide (R&D Systems, Minneapolis, MN, USA) was dissolved in DMSO to achieve a stock solution of 50 mM, which was then diluted in culture medium to a final concentration of 100 µM and applied to the cells. Control cells were treated with a similar amount of DMSO to exclude effects of the solvent. After an incubation period of 48 h, cells were detached as described in Section 2.1 and either collected for further processing or counted using a Neubauer counting chamber. Treatment with acidified nitrite was described recently [18]. After a treatment period of 5 min, cells were incubated for 48 h in culture medium and eventually detached and collected for further processing. For the analysis of long-term acidosis effects, melanoma metastasis cell line Mel Im was cultured in medium at pH 6.7 for at least 2 months prior to analysis. For this purpose, low-glucose DMEM was supplemented with 10% FCS, 1% penicillin/streptomycin, and 0.2% sodium bicarbonate as buffer. Cells were then incubated at 37 °C and 8% CO2 to set the desired pH value. Control cells were cultured conventionally at pH 7.4 in low-glucose DMEM including 3.7% sodium bicarbonate, supplemented with 10% FCS and 1% penicillin/streptomycin, at 37 °C and 8% CO2. Analysis of mRNA Expression Using Real-Time PCR Total RNA isolation was achieved using the E.Z.N.A.® Total RNA Kit II (Omega Bio-Tek, Norcross, GA, USA) according to the manufacturer's instructions. Generation of cDNA was performed as previously described [19]. For real-time PCR, LightCycler® 480 II devices (Roche, Basel, Switzerland) were used with forward and reverse primers from Sigma-Aldrich. Primer sequences can be found in Table 1. Immunofluorescent Stainings Approximately 20,000 cells were seeded on 18 mm round coverslips and incubated overnight. On the next day, cells were washed twice with PBS and subsequently fixed with 4% PFA for 10 min. The staining procedure started immediately after this fixation step, since even short-term storage was found to interfere with the nuclear PML signal in our experiments. Permeabilization using 0.1% Triton-X100 in PBS for 3 min was followed by 30 min of blocking with 10% BSA in PBS. The primary antibody against PML (1:200 in 1.5% BSA/PBS, Santa Cruz, sc-966) was added and incubated overnight at 4 °C. The secondary antibody (1:400 in 1.5% BSA/PBS, Thermo Fisher, A32727) was incubated for 1 h at room temperature. Cells were then stained with DAPI (1:10,000 in PBS, Sigma Aldrich) and mounted on microscope slides using Aqua-Poly/Mount (Polysciences, Warrington, PA, USA). Final stainings were analyzed using an Olympus IX83 inverted microscope in combination with Olympus CellSens Dimension software (Version 2.3, Olympus, Tokyo, Japan). The DAPI staining was also used to analyze heterochromatin formation. Brightness and contrast of representative images were adjusted evenly to increase visibility of the staining. Real-Time Cell Proliferation Analysis (RTCA) Proliferation was measured using the xCELLigence System (Roche) as described elsewhere [21]. In short, approximately 3000 cells/well were seeded on specific plates and loaded into the device. Proliferation was monitored for five (Mel Juso) or nine days (NHEM) without replacing the culture medium. The parameter "slope" describes the steepness of each curve during proliferation and was normalized to the control treatment. XTT Cell Viability Assay Approximately 3000 cells/well were seeded in a 96-well plate.
Cell viability was assessed after an incubation period of 7 days (NHEM) or 48 h (Mel Juso) using the Cell Proliferation Kit II (Roche) according to the manufacturer's instructions. Due to the low cell density, it was not necessary to replace the culture medium at any time during the incubation period. A Clariostar Plus Multiplate reader (BMG Labtech, Ortenberg, Germany) was used for photometric detection. Absorbance values were normalized to the control treatment. Staining of Senescence-Associated Beta-Galactosidase Activity Quantification of β-galactosidase activity was done 7 days post transduction in NHEM and 48 h post treatment in Mel Juso. Fixation and staining were performed using the senescence β-galactosidase staining kit (Cell Signaling, Danvers, MA, USA) according to the manufacturer's instructions. After staining, cells were washed twice with PBS and stored at 4 °C for up to two weeks. An Olympus IX83 inverted microscope in combination with Olympus CellSens Dimension Software (Olympus) was used to acquire images of the stainings, which were then quantified manually using ImageJ. The same images were used to manually assess and quantify changes in cellular morphology. Brightness and contrast of representative images were adjusted evenly to increase visibility of the staining. Flow Cytometry of Fluorescent Beta-Galactosidase Substrates Activity of β-galactosidase was also quantified using fluorescent substrates in combination with flow cytometry. After treatment and adequate incubation times (see Sections 2.2 and 2.3), approximately 350,000 cells were seeded in 6-well plates exactly 12 h prior to staining. The ImaGene Red™ C12RG lacZ Gene Expression Kit (Molecular Probes) was used in accordance with the manufacturer's instructions. In short, staining began with the addition of 300 µM chloroquine reagent in 1 mL of prewarmed cell medium. After incubation for 30 min at 37 °C, 6.67 µL of substrate reagent was added directly to the supernatant to achieve a final concentration of 33 µM. Cells were incubated for another 1 h at 37 °C before they were detached and collected. Following centrifugation, samples were resuspended in 1 mM PETG reagent in 1% BSA/PBS and transferred to FACS tubes. For staining with DDAO, a solution of 20 µM DDAO galactoside (Thermo Fisher) and 0.1 µM Bafilomycin A1 (Sigma Aldrich) in cell culture medium was prepared and applied to the cells. Plates were then sealed with parafilm and incubated for 90 min at 37 °C. Cells were eventually washed with PBS and detached, followed by centrifugation and resuspension in 1% BSA/PBS. All samples were measured using a BD LSRFortessa™ flow cytometer in combination with BD FACSDiva™ software (Version 8.0, BD Biosciences, San Jose, CA, USA). Statistical Analysis Analysis and visualization of experimental results were done using GraphPad Prism 9 software (Version 9.1.2, GraphPad Software Inc., San Diego, CA, USA). If not otherwise stated, at least three biological replicates were measured and statistical analysis was performed by Student's unpaired t-test. All results are normalized to the respective control treatment and shown as mean ± SEM. A critical value of p < 0.05 was considered statistically significant. RNA Markers of Senescence Quantification of gene expression provides an easy and reliable approach for assessing cellular conditions, including senescence. We here used quantitative RT-PCR to detect specific mRNAs regulated in senescence of normal human melanocytes (NHEM) and melanoma cells.
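The study does not state which relative-quantification scheme was applied to the real-time PCR data, so the following sketch of the widely used 2^(-ΔΔCt) method is illustrative only; the gene names and Ct values are hypothetical, not measured data.

```python
# Illustrative sketch of relative qPCR quantification using the common
# 2^(-ddCt) method. Whether this exact scheme was used in the study is not
# stated, so this is an example only; gene names and Ct values below are
# hypothetical, not measured data.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of a target gene vs. control, normalized to a reference gene."""
    d_ct_treated = ct_target - ct_reference          # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_treated - d_ct_control              # normalize to control condition
    return 2.0 ** (-dd_ct)

# Hypothetical example: CDKN2A in etoposide-treated cells vs. DMSO control,
# with beta-actin as reference gene.
fold = relative_expression(ct_target=24.1, ct_reference=16.0,
                           ct_target_ctrl=26.3, ct_reference_ctrl=16.2)
print(f"CDKN2A fold change vs. control: {fold:.2f}")
```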
NHEM received lentiviral transduction of mutated BRAF V600E, which leads to oncogene-induced senescence (OIS) as initially described by Michaloglou et al. [4]. Melanoma cell line Mel Juso was treated with 100 µM etoposide, an inhibitor of topoisomerase II, to induce DNA damage and thereby trigger cellular senescence. Traditional mRNA markers of senescence include cell cycle inhibitors and members of the senescence-associated secretory phenotype (SASP). We started by assessing the cell cycle inhibitors p21 CIP1/WAF1 and p16 INK4A, which are encoded by the CDKN1A and CDKN2A genes, respectively. While CDKN2A was significantly induced in both experimental settings, we detected a significant increase of CDKN1A only in melanoma cells treated with etoposide, while there was no significant effect in senescent NHEM (Figure 1A,B). A third cell cycle inhibitor, p53, encoded by the TP53 gene, did not show any regulation at the mRNA level in either system. We further assessed gene expression of CXCL2 and CCL8, which are associated with the SASP [22]. While both markers were increased in senescent NHEM and melanoma cells, statistical significance was only reached for CXCL2. Protein Markers of Senescence Since mRNA markers have a number of limitations, mostly due to the possibility of translational and posttranslational regulation, we next tested for different protein markers of cellular senescence. Interestingly, the aforementioned cell cycle inhibitors p16 INK4A, p21 CIP1/WAF1 and p53 are among the most common proteins for detection and quantification of cellular senescence [23]. We, therefore, used Western blot analysis to assess and compare their regulation in the different experimental settings. Protein levels of all three cell cycle inhibitors doubled during OIS in NHEM (Figure 2A). Melanoma cells treated with etoposide showed a stronger increase of p21 CIP1/WAF1, while upregulation of p53 turned out to be rather variable (Figure 2B). Cell cycle inhibitor p16 INK4A protein, however, was absent in melanoma cells (data not shown). As several studies found an association of ERK1/2 activation with cellular senescence [18,24], we assessed this marker next. A significant increase of phosphorylated ERK1/2 was found in both senescent NHEM and melanoma cells. It is important to note that the effect in NHEM is potentially caused by overexpression of mutant BRAF V600E, an upstream kinase of ERK1/2. The melanoma cell line used in our experiments also carries a mutation upstream of ERK1/2, affecting the HRAS and NRAS genes. In contrast to NHEM, however, both control and treatment cells bear these mutations, indicating that they do not account for the increased phosphorylation of ERK1/2 after etoposide treatment. Furthermore, we detected γH2AX as an indicator of DNA damage in melanoma cells, and found a strong but variable upregulation across biological replicates, which prevented statistical significance. The same marker could not be detected in senescent NHEM (data not shown). As another molecular marker of DNA damage, we assessed nuclear promyelocytic leukemia protein (PML), which is best quantified using immunocytochemistry. While we could detect a significant increase of nuclear PML staining in NHEM, there was only a tendency toward an upregulation in melanoma cells treated with etoposide (Figure 2C,D). Functional and Morphological Markers of Senescence A central hallmark of cellular senescence is the discontinuation of cell division and thereby proliferation. We here used real-time cell proliferation analysis (RTCA) to track cell growth over time, and found a significant decrease in both experimental settings (Figure 3A,B). In addition, proliferative activity can be measured indirectly by incubating cells for an appropriate time period followed by cell viability analysis. Since incubation times are largely dependent on the proliferation rate of control cells, we used 7 days (NHEM) and 48 h (Mel Juso) in our experiments. An XTT assay revealed significantly reduced cell viability in BRAF V600E-transduced NHEM and etoposide-treated Mel Juso cells, hence indicating reduced proliferation (Figure 3C,D). Some functional consequences of cellular senescence lead to morphological changes, including heterochromatin formation, flattening, and multinucleation of cells. Senescence-associated heterochromatin foci (SAHF) were detected using DAPI staining, revealing a significant increase in both primary and melanoma cells (Figure 3E,F). Next, we assessed flattening and multinucleation, which were both significantly elevated after treatment of Mel Juso with etoposide (Figure 3H). NHEM, however, already had high levels of flattened cells in the control treatment, with a small but significant increase after entering OIS (Figure 3G). Interestingly, multinucleation in NHEM was negligible, as only very few cells with two or more nuclei were found. Quantification of Senescence-Associated Beta-Galactosidase Activity Next, we quantified activity of senescence-associated beta-galactosidase (SA-β-Gal), which is considered to be the gold standard when it comes to detection of cellular senescence [16]. Multiple experimental approaches have been developed to reliably measure activity of beta-galactosidase in vitro, with the method of Dimri et al. [14] being the most widespread. It is based on the cleavage of X-Gal to yield a blue and insoluble dye, which can be easily detected using bright-field microscopy. We used this method to quantify senescent primary and melanoma cells, and detected a significant increase compared with the respective controls (Figure 4A,B). Next, two fluorescent substrates of beta-galactosidase were used in combination with flow cytometry, namely C12RG (Figure 4C,D) and DDAO galactoside (Figure 4E,F). While both substrates share the main feature of producing a fluorescent molecule upon hydrolysis by beta-galactosidase, their chemical and functional properties differ notably. The resorufin-based C12RG is not fluorescent in its inactive state and carries a lipophilic tail that integrates into the cellular membrane to anchor the fluorescent product within the cell and ensure signal stability. DDAO galactoside, on the other hand, is an intrinsically fluorescent molecule that drastically changes its excitation and emission spectra after hydrolysis. In our experiments, both molecules successfully detected senescent cells similarly to the traditional X-Gal assay. The percentage of positively stained cells was comparable among all three detection methods, in both control and treatment settings.
Validating Selected Markers for Detection of Cellular Senescence in Melanoma Cells While primary melanocytes are well characterized even in their senescent state, malignant melanoma is one of the most highly mutated cancers and thereby comes with a high degree of heterogeneity [25]. Further, induction of senescence in therapeutic or physiological settings might be due to a broad variety of stimuli, indicating that further validation is necessary for this cell type. Based on the data described in this study, we selected a set of three markers that showed the best results in melanoma cells treated with etoposide: SA-β-Gal, p21, and morphological changes, including flattening and multinucleation. In a first step, Mel Juso cells were treated with acidified nitrite, a novel antitumor treatment previously described [18]. This revealed a significant increase of SA-β-Gal, detected via X-Gal assay (Figure 5A). Induction of p21 was present at both the mRNA and protein level (Figure 5B). Since acidified nitrite interfered with β-actin expression (data not shown), we used 18s mRNA as reference, as well as Ponceau S staining during protein quantification. When assessing morphological changes, we could detect a significant increase of flattened cells, while multinucleation remained scarce (Figure 5C). To account for the mutational heterogeneity of malignant melanoma, cell line Mel Im was used in addition. In contrast to Mel Juso cells, which bear a NRAS Q61L mutation but wild-type BRAF, Mel Im cells carry wild-type NRAS but mutated BRAF V600E. Further, we introduced a physiological stimulus of cellular senescence, long-term acidosis, as described recently [26]. During long-term acidosis, Mel Im cells exhibited a strong increase of SA-β-Gal staining (Figure 5D), combined with increased p21 mRNA and protein (Figure 5E). Similar to the aforementioned treatment with acidified nitrite, we detected elevated levels of flattened cells without any relevant effect on multinucleation (Figure 5F). Discussion Reliable detection and quantification of senescence have been major challenges ever since its first description decades ago. Heterogeneous cell populations, combined with a strong dependency on cell type and senescence trigger [27], increase the difficulty of developing universal molecular markers. To date, the only generally valid marker of cellular senescence is thought to be SA-β-Gal [16], while the vast majority of remaining molecules have to be carefully assessed and validated for each experimental setting. This study focuses on melanocytic systems by comparing two clinically relevant in vitro models: primary melanocytes bearing mutant BRAF V600E to enter OIS, as commonly found in melanocytic nevi [4], and melanoma cells after treatment with the chemotherapeutic agent etoposide.
It thereby combines a physiological setting, in which the difference between proliferating and senescent cells is relatively small, with a therapeutic approach that compares highly proliferative cancer cells with severely damaged cells after treatment. The expected difference between control and treatment conditions, in terms of the senescent phenotype, is an important consideration to be made before selecting senescence biomarkers, as it defines the sensitivity required for the experiment. A second consideration is the number of molecular markers. While diagnostic and therapeutic approaches require sophisticated sets of biomarkers to detect senescent cells with great sensitivity and specificity, a reduced and simplified selection might be sufficient for most research applications. The latter will be addressed at the end of this section, as we will propose a condensed set of senescence biomarkers intended for research on melanocytes or melanoma cells. When assessing cellular senescence, cell cycle inhibitors are commonly used as mRNA and protein markers. Their importance has been extensively reviewed elsewhere [28][29][30]. Briefly, two different pathways are induced, namely Arf/p53/p21 and p16/pRb, both resulting in cell cycle arrest as reviewed by Larsson [31]. However, several limitations have to be taken into account: first, activation of cell cycle inhibitors is dynamic and depends on the cellular state. As indicated by several studies, p21 CIP1 and p53 are commonly found during the initiation of senescence, but may decline afterwards, while p16 INK4A shows delayed upregulation to stabilize senescence [32,33]. Consequently, assessing single cell cycle inhibitors might not be sufficient to detect senescent cells reliably. A second limitation is introduced by their low specificity, especially for p21 CIP1 and p53, as other cellular conditions can cause a similar upregulation. This includes quiescence [34,35], apoptosis [36,37], and cellular dormancy [38]. Finally, cancer cells commonly bear mutations of cell cycle inhibitors [39], potentially interfering with their re-activation and limiting their use as molecular markers of senescence. As the pathways for induction of senescence are far from being fully understood, establishment of a senescent phenotype without activation of major cell cycle inhibitors seems plausible. This is of special importance for malignant melanoma, as it belongs to the most highly mutated cancers [25]. The SASP is an important hallmark of cellular senescence and can be detected either by qPCR to measure mRNA levels of SASP components or via enzyme-linked immunosorbent assay (ELISA). We here used qPCR to detect levels of CXCL2 and CCL8, while many more SASP components might be equally useful [40,41]. Most of these markers, however, are easily affected by cell type, senescence trigger, and cellular microenvironment [42]. Further, SASP components are reportedly increased during quiescence and even apoptosis [43,44]. Moreover, DNA damage is considered to be a central mediator of cellular senescence, as it was previously linked to replicative as well as premature senescence caused by oncogene activation, cytotoxic therapy, or other triggers [45]. We assessed two members of the DNA repair response (DRR), gamma-H2AX and PML, which both turned out to be rather unreliable.
A possible explanation is found in the experimental settings used in this study: gamma-H2AX mediates one of the first steps in the recruitment and localization of DRR proteins [46], which possibly explains why it could not be detected in primary melanocytes in full senescence seven days after oncogene overexpression. The exact function and regulation of PML during the DRR remain unclear, but it was found to be more stable in our experiments. However, it is important to note that PML has widespread cellular functions and is involved in several physiological and pathological processes [47,48]. Another, as yet uncommon, marker of senescence is phospho-ERK1/2 as an indicator of MAPK activity. Although it seems counterintuitive at first, recent studies have reported significant evidence that MAPK signaling contributes not only to proliferation, but also to cellular senescence [49,50]. Since the pathways regulating ERK activation are affected by mutations in many cancers and the majority of melanomas [51], special caution should be exercised when using phospho-ERK1/2 for detection of senescence. Discontinuation of proliferation is the most important functional consequence of cellular senescence. A common and feasible way to detect this is with metabolic assays based on mitochondrial activity, including the XTT assay used in this study. Such indirect measurement of proliferation has certain drawbacks, as mitochondrial activity might be affected without any further consequences on proliferation, leading to false positive results. On the other hand, senescence itself was shown to affect mitochondria [52], thereby impairing the necessary correlation of mitochondrial activity and cell count. Consequently, metabolic assays have only limited reliability when measuring proliferation rates in the context of cellular senescence. Impedance-based systems like RTCA bypass these problems by directly quantifying the coverage of specific cell culture plates, which is a result of cell count and cell size. Although senescent cells commonly show changes in morphology and size [53], such effects can easily be excluded by analyzing the slope of the proliferation curve rather than raw impedance values. Morphological changes might serve as markers of senescence on their own, with the main advantage that they do not require any processing or staining. Senescent cells are usually flattened [53], which was supported by our experiments. Multinucleation was not found in melanocytes, but was observed in melanoma cells treated with etoposide; the reason for this difference is unknown. Further, there was no relevant increase of multinucleation when testing different senescence inducers and a second melanoma cell line, indicating that this parameter is not reliable. Since research on morphological changes during senescence and their underlying mechanisms is sparse, critical evaluation of their use as markers of senescence is hardly possible. Finally, we assessed the gold standard of senescence detection, SA-β-Gal. Its main advantages include easy detection and comparatively high, though not perfect, specificity for senescent cells [54]. Melanocytes are among the very few cell types that physiologically express high levels of lysosomal β-galactosidase, the same enzyme referred to as SA-β-Gal in senescent cells [14,15]. Since its expression increases over time, it is generally advisable to use neonatal melanocytes for in vitro experiments, as was done in this study. Furthermore, experiments including SA-β-Gal should be conducted at low passage numbers.
From our experience, primary neonatal melanocytes start to show increased β-galactosidase activity after approximately 15 population doublings (corresponding to 10 passages), which is why all experiments in this study were performed before the cells had doubled 12 times (8 passages). We then used two fluorescent substrates in combination with flow cytometry to measure SA-β-Gal and obtained results comparable to those of the traditional X-Gal assay, thereby confirming their validity. Flow cytometry has a number of advantages, including the possibility of analyzing full samples with a consistent threshold, instead of manually analyzing a small percentage of cells and individually defining positive and negative cells. Pigmented melanocytes introduce another challenge during analysis of X-Gal stainings, as it may be difficult to distinguish between brown pigment and blue dye. The combination of fluorescent substrates and automated analysis using flow cytometry offers an easy way to overcome such difficulties. Finally, neither staining with fluorescent substrates nor flow cytometry requires fixation or preprocessing of cells. This introduces the possibility of flow cytometric sorting of cells with increased SA-β-Gal activity, as has already been done in recent studies [55,56]. After evaluating the molecular markers used in this study, it becomes evident that although the majority of them reliably detect senescence, there is no single molecule or cellular property with sufficient specificity to discriminate between senescence and other, possibly related cellular states. A combination of several markers, each with its own advantages and limitations, might provide an adequate solution. As described initially, such a set of markers should be adjusted and validated for each cell type. Based on our data, we suggest two different sets of molecular markers for primary melanocytes and melanoma cells: when working with NHEM, increased SA-β-Gal activity should be the first marker and is preferentially assessed using flow cytometry. Cell cycle inhibitor p16 INK4A was found to be strongly and reliably induced, thereby rendering it the second-best molecule to detect when investigating cellular senescence. As morphological and functional characteristics were somewhat variable in NHEM, we suggest adding either CXCL2 as a marker of the SASP, or PML immunofluorescence for detection of DNA damage, as a third marker. In melanoma cells, SA-β-Gal activity also represents the main marker of cellular senescence, with the advantage that detection via the traditional X-Gal assay is sufficient. Besides this, morphological flattening and induction of cell cycle inhibitor p21 CIP1/WAF1 should be used as additional and reliable markers. In summary, this study assessed a variety of senescence markers in two different melanocytic systems. We found most of them to work reliably, but critical evaluation of their capabilities and drawbacks highlighted the importance of elaborate combined solutions. Finally, we proposed a set of up to three molecular markers for primary melanocytes and malignant melanoma cells to ensure reliable detection of cellular senescence in vitro. Our data support the ongoing discussion and potentially improve senescence detection, until novel and more sophisticated molecular markers are found.
Modeling Impact and Cost-Effectiveness of Increased Efforts to Attract Voluntary Medical Male Circumcision Clients Ages 20–29 in Zimbabwe Background Zimbabwe aims to increase circumcision coverage to 80% among 13- to 29-year-olds. However, implementation data suggest that high coverage among men ages 20 and older may not be achievable without efforts specifically targeted to these men, incurring additional costs per circumcision. Scale-up scenarios were created based on trends in implementation data in Zimbabwe, and the cost-effectiveness of increasing efforts to recruit clients ages 20–29 was examined. Methods Zimbabwe voluntary medical male circumcision (VMMC) program data were used to project trends in male circumcision coverage by age into the future. The projection informed a base scenario in which, by 2018, the country achieves 80% circumcision coverage among males ages 10–19 and lower levels of coverage among men above age 20. The Zimbabwe DMPPT 2.0 model was used to project costs and impacts, assuming a US$109 VMMC unit cost in the base scenario and a 3% discount rate. Two other scenarios assumed that the program could increase coverage among clients ages 20–29 with a corresponding increase in unit cost for these age groups. Results When circumcision coverage among men ages 20–29 is increased compared with a base scenario reflecting current implementation trends, fewer VMMCs are required to avert one infection. If more than 50% additional effort (reflected as multiplying the unit cost by >1.5) is required to double the increase in coverage among this age group compared with the base scenario, the cost per HIV infection averted is higher than in the base scenario. Conclusions Although increased investment in recruiting VMMC clients ages 20–29 may lead to greater overall impact if recruitment efforts are successful, it may also lead to lower cost-effectiveness, depending on the cost of increasing recruitment. Programs should measure the relationship between increased effort and increased ability to attract this age group. History of VMMC implementation and strategy in Zimbabwe In 2011, the World Health Organization (WHO) and the Joint United Nations Programme on HIV and AIDS (UNAIDS) released the "Joint Strategic Action Framework to Accelerate the Scale-Up of Voluntary Medical Male Circumcision for HIV Prevention in Eastern and Southern Africa 2012-2016" [1]. This document set a target of scaling up male circumcision (MC) to cover 80% of males ages 15-49 by 2016 in 14 countries with generalized HIV epidemics and low prevalence of male circumcision. Following the global guidance from WHO and UNAIDS, Zimbabwe conducted a series of epidemiological modeling exercises to estimate the impact of scaling up voluntary medical male circumcision (VMMC) on transmission of HIV in Zimbabwe. Analyses conducted in 2008 estimated that the greatest reduction in incidence for the medium term would come from circumcising men ages 20-29, and that in the long term, circumcising infants or adolescent boys (younger than 19 years of age) would result in the largest impact [2,3]. In response to this and additional costing and modeling conducted in 2010 [4], the Zimbabwe Ministry of Health set a target to scale up VMMC to 1.3 million males ages 13-29 between 2008 and 2015 [5]. As of June 2015, the national VMMC program had circumcised 509,753 males, of whom 441,133 were ages 13-29, representing 34% of the national VMMC target of 1.3 million males in this age group.
The circumcisions conducted by the end of 2014 in Zimbabwe were projected to avert 6,300 HIV infections by 2025, with coverage estimated at 19% among 10- to 29-year-olds and 18% among 15- to 29-year-olds [6]. Age-specific analyses of VMMC in the fourteen priority countries As implementers rolled out VMMC programs in Zimbabwe and the other VMMC priority countries in eastern and southern Africa, it became evident that recruiting adolescents ages 10-19 was much easier than recruiting adult men for VMMC, and that few men over the age of 35 were accessing VMMC services [7,8]. In some communities where traditional circumcision is practiced, circumcision during adolescence is normative [9], whereas circumcision for older men can be seen as shameful [10]. In addition, barriers to circumcision that exist for older men, such as time away from work, are less of an issue for adolescents [11]. Partly as a result of these observations, our team developed the Decision Makers' Program Planning Tool (DMPPT), Version 2.0, to examine the impact and cost-effectiveness of circumcising different age groups of males [12]. The five DMPPT 2.0 country application papers in this collection [13][14][15][16][17] all compare hypothetical scenarios in which MC coverage is scaled up to 80% in various age groups. The overall conclusions of these papers, similar to the outcomes of modeling conducted previously for Zimbabwe [2,3], are that the greatest immediate impact (largest reduction of HIV incidence over five years) can be achieved by circumcising males ages 20-29 or 20-34, and that these age groups are important for the efficiency (number of VMMCs per HIV infection averted) and cost-effectiveness (cost per HIV infection averted) of the program. Having reviewed these analyses, international donors are promoting increased coverage among males ages 20-29, to increase the impact and cost-effectiveness of the program. However, the analyses upon which these policy recommendations were based assumed that it would be possible to achieve 80% coverage in each age group, and that the unit cost of VMMC (the cost per VMMC, including the direct cost of the circumcision and all associated program costs, including demand creation, facilities, management, quality assurance, etc.) would be the same regardless of client age. To the contrary, implementation experience suggests that reaching 80% coverage among men over age 20 will be difficult without increased demand creation focused on this age group, combined with potentially costly changes in program implementation, such as evening and weekend hours and other approaches to make VMMC services more adult-friendly. In addition, if the efforts to increase numbers of clients in this age group are not as successful as hoped, services may be underutilized, leading to even higher costs per circumcision [18]. The effectiveness of these program innovations is untested. There are no data to show whether or not increasing investment in attracting adult men to VMMC will succeed, and if it does, to what extent. Thus, international donors' identification of males ages 20-29 as the age group having the greatest and most cost-effective impact on HIV prevention may require refinement based on implementation realities.
Exploring cost-effectiveness of increasing coverage among men ages 20-29 Zimbabwe VMMC program trends suggest that if the VMMC program in Zimbabwe continues without any specific efforts to attract older men, coverage will likely plateau at a lower level for men ages 20 and above. We used data on coverage trends by age for the national VMMC program in Zimbabwe to make predictions about the level at which coverage would plateau in the different age groups: the base scenario for our analyses. We projected the impact and cost-effectiveness of this scenario, which we considered more realistic than previous explorations that assume it is possible to reach 80% coverage in any age group without any additional investment in attracting older men to services. We then assessed the primary research question in this paper: How would expending greater effort (and therefore funds) to increase coverage among clients ages 20-29 affect the impact and cost-effectiveness of the VMMC program? Because no data about the relationship between cost and increasing coverage by age group are available, we created modeling scenarios to explore this question based on arbitrary assumptions about how increased effort may lead to increased coverage. We contrast these more realistic scenarios with one that assumes that reaching 80% coverage among all males ages 10-29 can be achieved with the same VMMC unit cost across all age groups (similar to the analyses conducted in the other countries). Methods The DMPPT 2.0 model is designed to analyze the effects of age at circumcision on program impact and cost. The DMPPT 2.0 model tracks the number of circumcised males among newborns and in each five-year age group over time, taking into account age progression and mortality. The model calculates discounted VMMC program costs and HIV infections averted in the population each year in a user-specified VMMC scale-up strategy, compared with a counterfactual scenario in which the MC prevalence remains unchanged. The MC prevalence prior to initiation of the VMMC program ("baseline MC prevalence") is assumed to be due to traditional or other circumcisions that continue to be conducted at the same rate after VMMC program initiation. Zimbabwe data sources The DMPPT 2.0 model is populated with population, mortality, and HIV incidence and prevalence projections from an external source. For the Zimbabwe country application, we used the national Spectrum/AIM model [19], which projects population size, mortality, and HIV prevalence and incidence based on data empirically collected from the country. Population by age and year, mortality by age and year, annual number of male births, and HIV incidence by age and year were exported from this Spectrum/AIM file into a national Zimbabwe DMPPT 2.0 file. Numbers of VMMCs conducted in the country each year through December 2014, disaggregated by client age, were provided by the Zimbabwe Ministry of Health and Child Care (MOHCC) on October 18, 2015 (S1 Table). The MC prevalence by age group in the model base year (2014) was derived from the Zimbabwe Demographic and Health Survey 2010-2011 [20]. The unit cost of VMMC used in the analysis started at US$109 (all subsequent references to currency are in U.S. dollars), based on Zimbabwe MOHCC data [5].
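As a rough illustration of how the cost-effectiveness quantities reported below are assembled (discounted total cost, cost per HIV infection averted, VMMCs per infection averted), the following sketch combines the US$109 unit cost and 3% discount rate mentioned in the text with invented placeholder values for annual VMMCs and infections averted; it is not the DMPPT 2.0 model.

```python
# Minimal sketch of the cost-effectiveness quantities used in this paper
# (cost per VMMC, discounted total cost, cost per HIV infection averted).
# The unit cost (US$109) and 3% discount rate come from the text; the
# yearly VMMC counts and infections averted below are invented placeholders.

def discounted_cost(vmmcs_per_year, unit_cost, discount_rate=0.03, base_year=2014):
    """Total program cost discounted to the base year."""
    total = 0.0
    for year, n in vmmcs_per_year.items():
        total += n * unit_cost / (1.0 + discount_rate) ** (year - base_year)
    return total

vmmcs = {2015: 250_000, 2016: 300_000, 2017: 350_000}   # placeholder scale-up
cost = discounted_cost(vmmcs, unit_cost=109.0)

infections_averted = 80_000                             # placeholder model output
print(f"Discounted cost: ${cost / 1e6:.0f} million")
print(f"Cost per HIV infection averted: ${cost / infections_averted:,.0f}")
print(f"VMMCs per infection averted: {sum(vmmcs.values()) / infections_averted:.1f}")
```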
Impact of scaling up MC coverage to 80% in each age group To estimate the impact of scaling up MC to 80% coverage in separate five-year age groups, the model scaled up MC coverage within each individual five-year age group from the baseline level in 2014 to the target in 2018 while holding coverage constant at the baseline MC prevalence for the other age groups. The scale-up applied a linear interpolation between the baseline MC prevalence for each age group in 2014 and the 80% target coverage in 2018. After 2018, the coverage for each age group was maintained at 80%. For more details on the methods behind the results in this paper surrounding modeled incidence reduction from providing VMMC to individual age groups, and modeled impact of scaling up VMMC to progressively lower age groups, see Kripke, Chen, and Vazzano, et al. [15]. Calculation of MC coverage from VMMC program data The remainder of this paper is based on scenarios derived by projecting historical trends in MC coverage by age group and year from Zimbabwe's VMMC program. Total MC coverage in year t and age group a (C_a,t) is given by the baseline MC prevalence for age group a plus the MC coverage resulting from the VMMC program for age group a: C_a,t = C_baseline,a + C_VMMC,a,t. Male circumcision coverage resulting from the VMMC program was calculated from the age-disaggregated VMMC program statistics presented in S1 Table as follows: C_VMMC,a,t = M_VMMC,a,t / P_a,t, where C_VMMC,a,t is the MC coverage in age group a in year t resulting from the VMMC program, M_VMMC,a,t is the number of males in age group a in year t who at some point in time had been circumcised through the VMMC program, and P_a,t is the number of males in the population in age group a in year t. M_VMMC,a,t is calculated as follows: the number of circumcised males in age group a at the beginning of year t was the number at the beginning of the previous year t-1, plus new VMMCs performed during year t-1 and net VMMCs aging into and out of group a, all adjusted for mortality. The increase in MC coverage in year t (I_t) is given by the change in total MC coverage from the previous year for each age group, I_t = C_a,t - C_a,t-1. Derivation of scenarios from trends in coverage resulting from the VMMC program To assess the impact, cost, and cost-effectiveness of increasing MC coverage among males ages 10-29 above what historical program trends would indicate, we used four scenarios. The first scenario assumed scale-up to 80% coverage among males ages 10-29, with the same unit cost ($109) for each five-year age group. Scale-up was linear from baseline levels in 2014 to 80% in 2018 and was maintained at 80% thereafter. The base scenario was created as follows: For each age group (10-14, 15-19, 20-24, 25-29, and 30-49), we plotted the annual increase in MC coverage, calculated from the VMMC program data as described above, for each year from 2011 to 2014. We plotted the annual increase in MC coverage rather than the annual MC coverage, because we observed that the VMMC program was expanding exponentially each year, and we wished to project the level at which MC coverage would plateau in each age group if the program continued to expand at the same rate in each age group until it reached saturation, which we defined as 80% MC coverage among males ages 10-19. An exponential trend line was fit in Microsoft Excel to the data for each age group (S1 Fig). S2 Table shows the formulas and R² values for each trend line. In the base scenario, the increases in MC coverage for 2015-2018 for each age group were projected based on the indicated equation, with maximum possible MC coverage set to 80%.
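A schematic implementation of the coverage bookkeeping and trend projection just described might look as follows. Aging between age groups and mortality adjustments are omitted, and all inputs are placeholder values, so this is a simplified sketch rather than the DMPPT 2.0 calculation.

```python
import numpy as np

# Schematic sketch of the coverage definitions and base-scenario projection
# described above. Aging between five-year age groups and mortality
# adjustments are omitted for brevity; all inputs are placeholder values.

def vmmc_coverage(cumulative_vmmcs, population):
    """C_VMMC,a,t = M_VMMC,a,t / P_a,t for one age group and year."""
    return cumulative_vmmcs / population

def total_coverage(baseline_prevalence, vmmc_cov):
    """Total MC coverage = baseline (traditional) prevalence + VMMC coverage."""
    return baseline_prevalence + vmmc_cov

def project_increase(years, a, b):
    """Exponential trend fitted to annual increases in coverage: I_t = a * exp(b*t)."""
    return a * np.exp(b * np.asarray(years, dtype=float))

# Placeholder projection for one age group: cap total coverage at 80%.
baseline = 0.04
coverage = total_coverage(baseline, vmmc_coverage(60_000, 750_000))
for year, inc in zip(range(2015, 2019), project_increase(range(1, 5), a=0.03, b=0.45)):
    coverage = min(coverage + inc, 0.80)
    print(year, round(coverage, 3))
```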
Based on this projection, the 10-14 and 15-19 year age groups reached 80% coverage by the end of 2017. We assumed that once the program reached 80% coverage of 10- to 19-year-olds, it would no longer be possible to increase MC coverage in the older age groups without additional demand creation and program modifications specifically targeted to these age groups. Therefore, in the base scenario, target coverage in the model from 2018 on was held constant for all age groups at the coverage achieved by the end of 2017 (Fig 1). In the DMPPT 2.0 model, if the actual MC coverage in any given year is higher than the target coverage, no circumcisions are done in that age group. Therefore, after 2017, modeled circumcisions continued in the 10-14 year age group to maintain the 80% coverage, and they ceased among males ages 15 and older. Increases in coverage among males ages 15 and older after 2017 occurred as a result of circumcised males aging in from the younger age groups, but these are not shown in Fig 1, which shows the target, not actual, coverage after 2014. Scenarios A and B were created as follows: The coverage targets for the 10-14 and 15-19 year age groups were the same as in the base scenario. The annual increases in coverage in the 20-24 year age group were twice those of the base scenario for the years 2015-2016. Accordingly, the unit cost for the 20-24 year age group was double that used in the base scenario, because we made the arbitrary assumption that doubling the unit cost would double the increase in MC coverage in each year, up to a maximum total coverage of 80%. Coverage for the 20-24 year age group plateaued at 80% by the end of 2017. The annual increases in coverage in the 25-29 year age group, as well as the unit cost for this age group, were double those of the base scenario for Scenario A and triple those of the base scenario for Scenario B for the years 2015-2016. Coverage for the 25-29 year age group plateaued at 80% by the end of 2017 for Scenario B. Table 1 summarizes these four scenarios. Impact of reaching 80% MC coverage in each age group Prior analyses in other countries [12] projected the impact and cost-effectiveness of scaling up to 80% MC coverage in various age groups. Age distribution of VMMC clients in Zimbabwe through 2014 The age distribution of VMMC clients in Zimbabwe has been biased toward adolescents. Clients ages 10-19 have been coming in at a higher rate, and clients ages 20 and older at a lower rate, than would be expected based on the age distribution of uncircumcised men (Fig 3). Although the program aimed to circumcise males ages 13-29 according to the national strategy, 70% of VMMC clients in the Zimbabwe VMMC program through 2014 were between the ages of 10 and 19. As of the end of 2014, Zimbabwe had reached an estimated 21% MC coverage (including both traditional circumcisions and VMMCs) among 10- to 14-year-olds, 22% coverage among 15- to 19-year-olds, 16% coverage among 20- to 24-year-olds, and 14% coverage among 25- to 29-year-olds. These coverage levels represented increases of 17%, 17%, 8%, and 4% in coverage over baseline levels for these four respective age groups [6]. If these trends continue, the VMMC program will reach 80% coverage among adolescents ages 10-19 long before reaching this level of coverage among adults ages 20-29. Furthermore, the relative proportion of clients ages 10-14 has been increasing over time, whereas the proportion of clients over age 20 has been decreasing.
The proportion of clients ages 15-19 appears to have stabilized in the past four years (Fig 4).

MC coverage scenarios based on program trends through 2014

Given trends in the distribution of VMMCs by age group, we projected that implementers would not reach 80% MC coverage among the age groups above age 19 without significant additional effort and cost to recruit adults to access VMMC services. We asked at what level of MC coverage each age group would plateau if current trends continue. We assumed that as services continued to expand throughout the country, demand would saturate in each age group either when MC coverage reached 80% in a given age group or when MC coverage reached 80% in the 10-19 year age group, whichever came first (Fig 1). The assumption behind this was that this pattern would be followed within each VMMC catchment area as services were rolled out across the country, and this assumption became the base scenario for the analyses. We then hypothesized that it would be possible to increase coverage of the 20-29 year age group by putting increased effort and resources into interventions tailored to this age group. In the absence of costing data by age segment, we made an arbitrary assumption that a linear relationship exists between the unit cost for clients ages 20-29 (a proxy for effort) and the percentage of the target population ages 20-29 circumcised each year, such that doubling the VMMC unit cost would double the percentage of the target population circumcised, and so on. Thus we created two additional scenarios, as Table 1 shows in detail.

Fig 5 shows the reduction in HIV incidence that can be achieved by scaling up to the scenario targets from Table 1 in each individual age group by 2018 (as with Fig 2, each line represents a hypothetical scenario in which only males of that age group are circumcised). Unlike in Fig 2, in which MC coverage was scaled up to 80% in each age group by 2018, in the base scenario the greatest reduction in HIV incidence after 15 years (by 2029) is achieved by circumcising clients ages 15-19 and 20-24. As coverage increases among males ages 20-24 from the base scenario to Scenario A, the impact of circumcising that age group increases, and it initially surpasses the impact of circumcising the 10-14 and 15-19 year age groups. When the program circumcises clients ages 25-29, the impact progressively increases as coverage is scaled up to 42%, 69%, and 80% in the base, A, and B scenarios, respectively. Nonetheless, the impact of circumcising males ages 25-29 plateaus at a lower level than it does with the younger age groups, even in Scenario B, where MC coverage reaches 80%. While Fig 5 shows the reduction in HIV incidence that can be achieved by circumcising each individual five-year age group, this is not the same as the contribution of each age group to the reduction in incidence when broader age bands are circumcised. Therefore, it is useful to assess the contribution of each age group when males across the entire 10-34 age range are circumcised and the coverage plateaus for each age group at the level specified in each scenario. Fig 6 compares the impact of adding progressively lower age groups for each of the three scenarios, showing the relative contribution of each age group to the reduction in incidence in each scenario. In the base scenario, the greatest reduction in HIV incidence is contributed by including the 20-24 year age group, because the distance between the 20-34 and 25-34 lines is greatest.
In Scenarios A and B, when the 25-29 year age group is scaled up to 69% and 80% coverage, respectively, this age group makes the greatest contribution to reducing HIV incidence, as evidenced by the fact that, for these scenarios, the distance between the 25-34 and 30-34 lines is the greatest. This demonstrates that the contribution of each age group to the impact on HIV incidence partially depends on the level of coverage that can be achieved in that age group. Male HIV incidence is highest in the 25-29 and 30-34 year age groups (S2 Fig). This is why, even when coverage in the 25-29 year age group is less than that of the younger age groups (such as in Scenario A, when coverage in this age group is 69% and coverage among those ages 10-24 is 80%), the 20- to 29-year-olds can provide the greatest contribution to HIV incidence reduction. Inclusion of the 10-14 year age group provides a negligible additional HIV incidence reduction, as the majority of these clients are not yet sexually active and would therefore benefit just as much by being circumcised between ages 15 and 19.

Table 2 shows the cost and impact of the various scenarios. The base scenario has the lowest total cost and the lowest number of HIV infections averted. The 80% scenario has the lowest number of VMMCs per HIV infection averted and the lowest cost per HIV infection averted. In comparison with the base scenario, Scenarios A and B have progressively higher total cost, total HIV infections averted, and cost per HIV infection averted, and progressively lower numbers of VMMCs per HIV infection averted. No data are currently available about the relationship between cost and increased coverage for a specific age group, so the unit cost assumptions used in Scenarios A and B in Table 2 are arbitrary. We therefore conducted a sensitivity analysis in which the unit cost in Scenario A was multiplied by a series of factors relative to the unit cost of the base scenario (the other parameters shown in Table 2 did not change in the sensitivity analysis). Table 3 shows the results of this sensitivity analysis. If no additional cost per VMMC is required to increase coverage among males ages 20-29 (unit cost multiplier 1; unit cost $109), the total cost of Scenario A is $420 million, the same as for the 80% scenario, but with fewer HIV infections averted, because Scenario A does not reach 80% coverage among males ages 25-29. (The reason the total cost is the same is that the rate of scale-up for the 80% scenario is linear, while that of Scenario A is exponential, and the two scenarios happen to end up with roughly the same total number of VMMCs over the scale-up phase.) The cost per HIV infection averted for Scenario A in this case is $5,400, which is higher than that of the 80% scenario but lower than that of the base scenario. When the cost per VMMC for males ages 20-29 is 50% higher than in the base scenario (unit cost multiplier 1.5; unit cost $163), the cost per HIV infection averted in Scenario A, $6,000, is the same as in the base scenario, and the total cost of Scenario A, $468 million, is higher than that of the 80% scenario. If the unit cost required to increase coverage among males ages 20-29 is more than 50% higher than that of the base scenario, the cost per HIV infection averted is higher than that of the base scenario.

(Note to Table 3: The unit cost of circumcising men ages 20-29 in the base scenario ($109) was multiplied by the indicated unit cost multiplier for these age groups in Scenario A. All other parameters for Scenario A remained the same as those shown in Table 1.)
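As a rough illustration of the sensitivity analysis just described, the sketch below computes the cost per HIV infection averted as the unit cost for clients ages 20-29 is scaled by a multiplier. The only figures taken from the text are the $109 base unit cost and the base scenario's roughly $6,000 per infection averted; the circumcision counts and infections averted are invented placeholders, so the output illustrates the mechanics of the comparison rather than reproducing Table 3.

```python
# Illustrative sketch (hypothetical inputs): cost per HIV infection averted as the
# unit cost for clients ages 20-29 is scaled, mirroring the sensitivity analysis
# described in the text. Circumcision counts and infections averted are placeholders.

BASE_UNIT_COST = 109.0          # US$ per VMMC, as quoted in the text
VMMC_20_29 = 1_200_000          # assumed number of VMMCs for ages 20-29 in Scenario A
VMMC_OTHER = 2_600_000          # assumed number of VMMCs for all other ages
INFECTIONS_AVERTED = 80_000     # assumed cumulative HIV infections averted
BASE_COST_PER_HIA = 6_000.0     # cost per infection averted in the base scenario (from text)


def cost_per_infection_averted(multiplier):
    """Total program cost / infections averted when the 20-29 unit cost is scaled."""
    total_cost = VMMC_OTHER * BASE_UNIT_COST + VMMC_20_29 * BASE_UNIT_COST * multiplier
    return total_cost / INFECTIONS_AVERTED


if __name__ == "__main__":
    for m in (1.0, 1.5, 2.0, 3.0):
        value = cost_per_infection_averted(m)
        flag = "below" if value < BASE_COST_PER_HIA else "at/above"
        print(f"multiplier {m}: ${value:,.0f} per infection averted ({flag} base scenario)")
```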
Limitations

This study has several limitations. The general limitations of the DMPPT 2.0 model are discussed in the methods paper in this collection [12], and they also apply to this analysis. The primary limitations of this particular analysis are the assumptions about the level at which male circumcision coverage would plateau in each age group and the fact that the actual relationship between cost and increased coverage by age group is unknown. Pilot programs to recruit greater numbers of clients over the age of 20 are under way not only in Zimbabwe (as described) but also in South Africa and Tanzania. It will be crucial for these programs to collect data about the relationship between program cost and increased recruitment among this age group.

Discussion

In this paper, we predicted that VMMC coverage in the age groups above age 19 would plateau at a lower level than in the younger age groups, based on historical trend data from the VMMC program and knowledge of the sociocultural factors that make VMMC more desirable to adolescents. If our prediction turns out to be the case, the relative contribution of each age group to HIV incidence reduction will be based on a combination of the level of coverage at which the program plateaus for each age group and the HIV incidence in that age group and the next higher age group. In the base scenario explored in this paper, the greatest contribution to HIV incidence reduction comes from circumcising males between the ages of 15 and 19. Increasing coverage among males ages 20-24 and 25-29 increases the contribution of these age groups to HIV incidence reduction. No data have been published or are otherwise available showing how much it costs to increase coverage among males ages 20 and above. We hypothesized that it would be possible to increase coverage of the 20-29 year age group by putting increased effort and resources into demand creation interventions tailored to this age group, such as outreach to specific locations where older men (ages 20-29) convene. In addition, increased uptake of services among this older age group would likely entail structural changes at service delivery sites to make them more attractive to older males, such as "VIP services," extended hours, and separate transport and waiting areas for adults, who are often embarrassed to mix with adolescents. Additional formative research could be conducted to tailor campaigns to the factors that motivate the older males in their specific contexts, including socioemotional factors that may move them to action. In the absence of actual data about the cost required to increase coverage among 20- to 29-year-olds, we conducted an analysis using an arbitrary assumption that a linear relationship exists between the age-specific unit cost and the percentage of the target population ages 20-29 circumcised in each year, for the sake of exploring what might happen. Given this assumption, our analysis demonstrated that increased coverage among these age groups would lead to higher costs per HIV infection averted (lower cost-effectiveness) for the VMMC program than did scenarios that assumed the cost to reach men ages 20-29 would be the same as the cost to reach adolescents ages 10-14.
In Scenarios A and B, in which increased resources were used to increase coverage among the population ages 20-29 compared with the base scenario, the number and percentage of HIV infections averted, the total cost, and the cost per HIV infection averted were progressively higher than in the base scenario, while the numbers of VMMCs per HIV infection averted were progressively lower. A sensitivity analysis in which the unit cost for circumcising males ages 20-29 was varied for Scenario A showed that the cost per HIV infection averted was equal in the base scenario and Scenario A when the unit cost of circumcising males ages 20-29 in Scenario A was 1.5 times that of the base scenario. Below this threshold, the cost per HIV infection averted was lower in Scenario A than in the base scenario, and vice versa. Therefore, if additional resources are available to increase coverage among men ages 20-29, it might be possible to increase the overall impact of the VMMC program, and doing so could make the program either more or less cost-effective. The actual relationship between cost and increased recruitment of clients in the 20- to 29-year age group will determine whether these efforts make the program more or less cost-effective, so collecting these data is crucial.

Zimbabwe is trying to increase the representation of men ages 20-29 among VMMC clients. In order to scale up demand and encourage men, especially those 20-29 years old, to get circumcised, the country began a comprehensive research program in 2015 to map a man's journey to circumcision and identify points at which interventions could be strategically placed to increase demand for circumcision. Based on this research, the Government and implementing partners developed a detailed communications and marketing strategy targeting specific segments of men. In 2016, the country began implementing that strategy, accompanied by costing studies. The data collected can shed insights on the feasibility of this strategy and inform further analyses of the cost-effectiveness of increasing demand among 20- to 29-year-old men.

The modeling presented here demonstrates that cost-effectiveness analyses of age targeting by VMMC programs need to take into account two related factors that were not considered previously: (1) the feasibility of reaching high levels of coverage among specified target age groups, and (2) if reaching these high levels of coverage is possible at all, how much that will cost in comparison with the costs of programs as currently implemented. Zimbabwe is modifying its VMMC program to attract more males ages 20-29. Even if the country's age-specific recruitment efforts are successful, increasing the program's overall impact on HIV incidence, the added investment required may also lead to lower cost-effectiveness. Data collected from this program in Zimbabwe and from similar efforts in other countries can be applied to cost-effectiveness analyses to inform future age-specific VMMC strategies in Zimbabwe and in other VMMC priority countries.

S1 Table. Number of VMMCs Conducted in Zimbabwe, Disaggregated by Age Group and Year. Note: Data were obtained from national program records. Because VMMCs for ages 30-49 were not available disaggregated by five-year age groups, they were disaggregated based on the age distribution of circumcisions conducted in Malawi in PEPFAR FY 2013, based on PEPFAR program data. VMMCs for ages 50+ were put into the 50-54 age group in the DMPPT 2.0 model.
7,131.6
2016-10-26T00:00:00.000
[ "Economics", "Medicine" ]
Charge Transfer in InAs@ZnSe-MoS 2 Heterostructures for Broadband Photodetection

Absorbing near-infrared (NIR) photons, with longer wavelengths, in atomically thin monolayer MoS 2 presents a significant challenge due to its weak optical absorption and narrow absorption bands. Consequently, MoS 2 -based photodetector devices often experience low responsivity and a limited detection window. Herein, a novel InAs@ZnSe core@shell/1L-MoS 2 heterostructure is showcased, leveraging InAs@ZnSe as the primary infrared-absorbing material and exploiting the formation of a type-II heterostructure. Steady-state and time-resolved spectroscopy, along with optoelectronic characterization, are employed to investigate photo-induced charge transfer dynamics. The results show efficient hole transfer to InAs@ZnSe upon excitation of both materials. In contrast, with selective excitation of InAs@ZnSe, electron transfer is observed from InAs@ZnSe to the 1L-MoS 2 . The heterostructure demonstrates a broadband photoresponse spanning the wavelength range of 300 to 850 nm, exhibiting a responsivity of ≈10 3 A/W and a detectivity of ≈10 11 Jones. The signal-to-noise ratio increases substantially, by 3 to 4 orders of magnitude, for 700 and 850 nm excitation compared to pristine 1L-MoS 2 . The enhancement in photoresponse and signal-to-noise ratio is attributed to increased absorption, which helps eliminate defect and trap states, thereby promoting the photogating effect.

Introduction

[8] For example, monolayer MoS 2 has a direct band gap, whereas the multilayer form exhibits an indirect band gap. [9] Monolayer TMDCs possess unique electronic and optical properties, which have paved the way for numerous applications in optoelectronics, photonics, and energy storage devices. [1,2,10] [13] Consequently, photodetector devices based on TMDCs frequently exhibit very low photoresponse and a limited detection window due to their bandgap (for MoS 2 , this threshold is 680 nm). [14,15] Several strategies have been documented in the last few years to increase optical absorption and extend the light detection window. [16,17] These strategies mainly include coupling of TMDCs with plasmonic nanomaterials, strain engineering, and coupling TMDCs with low-dimensional materials. [18][19][20] However, the optical absorption of the plasmonic nanomaterials remains a major challenge, limiting the detection window. [13,18,21,22] Strain engineering also helps to increase the detection wavelength. Recently, Liu et al. proposed the strain-plasmonic coupling effect with MoS 2 , enabling detection up to 740 nm with a very high signal-to-noise ratio (650%) compared to pristine MoS 2 . [23] Coupling TMDCs with low-dimensional materials involves using the latter as photo-absorbing materials to enhance absorption and detection wavelength. [24,25] For instance, Kufer et al. proposed a hybrid 2D-0D (MoS 2 -PbS) structure, [26] and their device exhibited a record responsivity (>10 5 A/W), higher than what can be achieved individually by PbS quantum dots or MoS 2 -based photodetectors. [26] However, in this configuration, a very high gate voltage (−100 V) was applied along with the use of toxic PbS. [26] Similarly, Sahatiya et al. proposed a 2D MoS 2 -carbon quantum dot hybrid structure for broadband photodetection, achieving detection up to 780 nm with an observed device photoresponsivity on the order of mA/W. [27] [30] For example, Pal et al.
[29] have demonstrated wafer-scale MoO 3 /MoS 2 /Si heterojunctions for wavelength-selective photodetection applications in the spectral range of 400-700 nm. This was achieved using variable-sized MoO 3 /MoS 2 colloidal core-shell quantum dots (QDs) and adjusting the oxide shell-to-core thickness ratio. [28,29] [33][34] However, it is important to note that both 0D and 2D materials have limitations in optical absorption, quantum efficiency, and other parameters. [35,36][37][38][39] Recently, Ning et al. [40] reported computational investigations of the interfacial InAs/MoS 2 heterostructure (HS). They proposed a larger redistribution of charges, a strong interfacial interaction, and an improved absorption of light in the InAs/MoS 2 heterostructure compared with MoS 2 alone. [40] However, to the best of our knowledge, experimental work on InAs/MoS 2 heterostructures has not been reported, especially concerning InAs nanocrystals and MoS 2 monolayers. In this work, we fabricated an InAs@ZnSe/MoS 2 HS using mechanically exfoliated monolayer MoS 2 (monolayer MoS 2 is hereafter referred to as 1L-MoS 2 ) and InAs@ZnSe colloidal core@shell nanocrystals (henceforth referred to as "InAs@ZnSe NCs"). Coupling of InAs@ZnSe NCs with MoS 2 results in a charge transfer between both materials upon optical excitation of either one or both materials. From our findings, supported by the reported literature, [39] we could confirm a type II band alignment of the heterostructure. We employed steady-state and time-resolved spectroscopy and photocurrent measurement techniques to probe the photo-induced charge transfer processes. The results show hole transfer to InAs@ZnSe NCs upon excitation of both materials and, with selective excitation of InAs@ZnSe NCs, electron transfer from InAs@ZnSe NCs to the MoS 2 . The InAs@ZnSe/MoS 2 HS device exhibits a broadband photoresponse from 300 to 850 nm with a maximum responsivity of 4276 A/W and a detectivity (D*) of ≈10 11 Jones at a wavelength of 700 nm, in comparison with pristine 1L-MoS 2 . The signal-to-noise ratio increases by 3 to 4 orders of magnitude for 700 and 850 nm excitation compared with monolayer MoS 2 . The increase in detection range, the higher responsivity compared with single-layer MoS 2 , and the improvement in the signal-to-noise ratio are attributed to the photoinduced interaction of InAs@ZnSe NCs with MoS 2 , which broadens the absorption window of 1L-MoS 2 , thus increasing the overall absorption, and to the removal of defect and trap states, promoting the photogating effect. Our study highlights the potential applications in optoelectronic devices, with the fundamental photophysics providing a pathway to integrate 2D semiconductors with 0D InAs NC materials for future infrared technologies.

Results and Discussion

InAs@ZnSe core@shell nanocrystals (InAs@ZnSe NCs) were synthesized using the experimental procedure recently reported by our group. [38,41]
In detail, InAs@ZnSe NCs were produced by employing ZnCl 2 as an additive, tris(dimethylamino)arsine as the As precursor, and alane N,N-dimethylethylamine as the reducing agent. The reaction was performed at 300 °C for 15 min, after which it was quenched, and the shelling process was carried out in situ by adding tri-n-octylphosphine-Se (1 m) and heating the system at 300 °C for 10 min. As-synthesized InAs NCs are ≈6 nm in size, based on transmission electron microscopy (TEM) analysis (Figure 1a). The details regarding the shell thickness and chemical composition are provided in Table S1 (Supporting Information, Section 1) and are also reported in our previous work. [38,41] InAs NCs exhibit excitonic absorption at ≈835 nm and emission at ≈935 nm (Figure 1b). [44] A detailed description of the exfoliation process is provided in Figure S1 (Supporting Information, Section 1.1), along with the overall device fabrication procedure. The Au-assisted exfoliation technique offers a broader platform for achieving large-area exfoliation of 2D materials. [45] The optical image of a large area of MoS 2 is shown in Figure 1c, where the monolayer MoS 2 is marked with a white circle and labeled 1L-MoS 2 , and the bare SiO 2 /Si substrate is marked with a yellow dotted line for clarity. The 1L-MoS 2 is not present everywhere; the darker contrast indicates regions with few or multiple layers of MoS 2 (indicated by a black arrow). Additionally, we performed atomic force microscopy (AFM) to measure the thickness of the monolayer, which was found to be ≈0.9 nm (Figure S2a,b, Supporting Information), in good agreement with previous work. [46,47] The colloidal InAs@ZnSe NCs were spin-coated onto the MoS 2 to form a uniform single layer of InAs@ZnSe NCs on top of both the 1L-MoS 2 and the substrate (SiO 2 /Si). Figure 1d shows a scanning electron microscope (SEM) image of the InAs@ZnSe/MoS 2 heterostructure (HS), with an area covered by a uniform layer of InAs@ZnSe NCs shaded in grey, while the InAs@ZnSe/MoS 2 HS is marked in yellow. To estimate the thickness of the InAs@ZnSe/MoS 2 HS, we performed AFM measurements, which revealed a thickness of ≈10 nm, as shown in Figure S2c,d (Supporting Information). Other high-resolution SEM images of the InAs@ZnSe/MoS 2 HS, with discussion, are provided in Figure S3a-f (Supporting Information, Section 1.3). It is crucial to ensure that the heterostructure contains minimal or no non-interacting InAs@ZnSe NCs, as their presence can adversely affect the desired properties and performance. High-spectral-resolution Raman spectroscopy was performed on pristine 1L-MoS 2 and the InAs@ZnSe/MoS 2 HS under identical conditions (i.e., employing 532 nm laser excitation). Two characteristic Raman peaks, E 1 2g and A 1g , arise from the in-plane and out-of-plane modes of MoS 2 , as illustrated in Figure 1e. The separation between E 1 2g and A 1g for pristine MoS 2 is ≈18 cm −1 (Figure 1e), further confirming the monolayer nature of the MoS 2 flake, in good agreement with previous reports. [43,48,49]
The peak position of the E 1 2g mode for pristine MoS 2 and the InAs@ZnSe/MoS 2 HS is ≈386.47 and 386.12 cm −1 , respectively, while for A 1g it is 404.20 cm −1 for MoS 2 and 404.40 cm −1 for the InAs@ZnSe/MoS 2 HS. In the case of the InAs@ZnSe/MoS 2 HS, both the A 1g and E 1 2g peaks exhibit no significant shift, indicating that the crystalline nature of the 1L-MoS 2 is preserved. A small fluctuation in the A 1g mode (Figure 1e, HS spectrum) is noted, but it is within the instrument's margin of error. [49] The integration of InAs@ZnSe NCs with 1L-MoS 2 could lead to a type II energy band alignment, attributed to the differences in work functions of 1L-MoS 2 [50] and InAs@ZnSe NCs, [39,49-52] as illustrated in the schematic shown in Figure 1f. The InAs@ZnSe/MoS 2 HS was then investigated using steady-state photoluminescence (PL) spectroscopy under ambient conditions. InAs@ZnSe NCs show PL at ≈1.5 eV (950 nm), as shown by the blue curve in Figure 2a, and 1L-MoS 2 exhibits PL with two distinct peaks, as shown by the blue curve in Figure 2b. The lower-energy peak at 1.82 eV (670 nm) is the A exciton, originating from the direct electron transition from the valence band maximum to the conduction band minimum at the K or K' point of the Brillouin zone. The higher-energy peak at 1.95 eV (630 nm) is the B exciton, resulting from the electron transition from the spin-orbit-split valence band maximum to the conduction band minimum at the K' or K point. [53,54] The PL from both InAs@ZnSe NCs and 1L-MoS 2 was quenched upon excitation of both components with 532 nm light in the heterostructure configuration. Figure 2a shows a complete quenching of the PL from the InAs@ZnSe NCs (between 1.4 and 1.6 eV) in the HS (red curve) compared with InAs@ZnSe NCs on a Si/SiO 2 substrate (blue curve). Figure 2b shows the PL of 1L-MoS 2 on the SiO 2 /Si substrate (blue curve) and in the HS (red curve); partial quenching along with a red shift is observed, with the A exciton at ≈1.82 eV (670 nm) and the B exciton at ≈1.95 eV (630 nm), which is also indicative of the monolayer nature of MoS 2 . [32] The PL quenching in both materials could indicate charge transfer within the InAs@ZnSe/MoS 2 HS and suggests type-II band alignment. Previous reports suggest that PL red shifting and quenching in MoS 2 may result from an n-type doping effect due to the specific band alignment at the interface and the formation of trions. [54,55] To gain further understanding of the charge transfer and photoluminescence (PL) quenching mechanisms, we deconvoluted and fitted the 1L-MoS 2 and InAs@ZnSe/MoS 2 HS PL spectra, as shown in Figure 2b. The 1L-MoS 2 PL is a combination of exciton (X 0 ) and trion (X T ) emission, with E(X 0 ) > E(X T ). [56] The equilibrium of excitons (n X ), trions (n T ), and free carriers (n e ) follows a Boltzmann distribution at low excitation densities, as shown in Equation (1): [57-59]

n X n e / n T = (4 m X m e / (π ħ 2 m T )) k B T exp(−E B / k B T)   (1)

where m e is the effective mass of the electron, m X and m T are the effective masses of the exciton and trion, respectively, T is the temperature, and E B is the intrinsic trion binding energy. By employing Lorentzian fitting, we calculated the variation in free-carrier density (n e ) based on the densities of excitons (X 0 ) and trions (X T ) in both 1L-MoS 2 and the InAs@ZnSe/MoS 2 HS. Remarkably, in the case of the InAs@ZnSe/MoS 2 HS, n e increases by eightfold compared with 1L-MoS 2 alone. This increase in n T was observed across the whole flake (Figure S4, Supporting Information).
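As an illustration of how Equation (1) can be used, the sketch below estimates the relative change in free-carrier density n e from fitted trion and exciton PL weights. It assumes the standard exciton-trion mass-action form written above; the effective masses, trion binding energy, temperature, and spectral weights are placeholder values chosen only so that the ratio comes out near eightfold, and they are not the fitted parameters of this work.

```python
# Illustrative sketch: estimating the relative free-carrier density n_e from the
# exciton (X0) and trion (XT) PL weights via the mass-action relation in Eq. (1).
# Effective masses, temperature, binding energy, and the PL area ratios are
# assumed placeholder values, not the fitted values from this work.
import numpy as np

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J/K
M0 = 9.1093837015e-31    # electron rest mass, kg

m_e = 0.35 * M0          # assumed electron effective mass in 1L-MoS2
m_h = 0.45 * M0          # assumed hole effective mass
m_X = m_e + m_h          # exciton mass
m_T = 2 * m_e + m_h      # negative trion mass
E_B = 0.030 * 1.602e-19  # assumed trion binding energy, ~30 meV
T = 300.0                # K


def free_carrier_density(area_trion, area_exciton):
    """n_e = (n_T / n_X) * (4 m_X m_e / (pi hbar^2 m_T)) * kB T * exp(-E_B / kB T)."""
    prefactor = 4 * m_X * m_e / (np.pi * HBAR**2 * m_T) * KB * T
    return (area_trion / area_exciton) * prefactor * np.exp(-E_B / (KB * T))


if __name__ == "__main__":
    n_e_bare = free_carrier_density(area_trion=0.3, area_exciton=1.0)   # assumed PL weights
    n_e_hs = free_carrier_density(area_trion=2.4, area_exciton=1.0)     # assumed PL weights
    print(f"relative increase in n_e: {n_e_hs / n_e_bare:.1f}x")        # 8x for these inputs
```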
Upon photoexcitation with a 532 nm light source, n-doping of MoS 2 is observed, potentially due to both electron transfer to 1L-MoS 2 and hole transfer to InAs@ZnSe NCs in the HS. To further investigate the charge transfer processes in the InAs@ZnSe/MoS 2 HS, transient absorption pump-probe spectroscopy was used. [60] The broadband transient absorption map is obtained by measuring the variation of the supercontinuum probe transmitted through both 1L-MoS 2 and the InAs@ZnSe/MoS 2 HS on a SiO 2 substrate after 532 nm and 800 nm excitation. In this setup, the 532 and 800 nm wavelengths serve as the pump sources, initiating the photoexcitation process, while broadband white light is employed as the probe. The details of the experimental setup and other parameters are provided in the Experimental Section. We mainly look at the excitons of 1L-MoS 2 , and specifically the A exciton, to identify the charge transfer processes. The transient absorption map of 1L-MoS 2 under above-bandgap excitation at 532 nm shows a typical bleach signal of the excitonic peaks labeled A (≈670 nm) and B (≈630 nm), [60] as shown in Figure S5 (Supporting Information). In the case of below-bandgap excitation of 1L-MoS 2 (at 800 nm), only a very weak, noisy signal (blue curve in Figure 2c) is observed, indicating a weak interaction of the pump with the sample. [43] Interestingly, in the case of the InAs@ZnSe/MoS 2 HS under 532 nm excitation (Figure S5d, Supporting Information), slightly more intense bleaching is observed along with a slightly longer decay of the A exciton compared with the signal from 1L-MoS 2 (see Figure 2d). The decay is fitted with a biexponential function that includes fast and slow components; the observed values of the fast and slow components are 0.29 and 10 ps for 1L-MoS 2 and 0.27 and 12 ps for the InAs@ZnSe/MoS 2 HS, respectively. The sub-picosecond times (fast component) indicate the formation time of excitons in 1L-MoS 2 , while the longer time (slow component) correlates with the exciton lifetime. The weight of the fast component is reduced from 52% to 37% in the InAs@ZnSe/MoS 2 HS, indicating a competing process such as charge transfer. [61,62] These observations are indicative of charge transfer processes occurring from InAs@ZnSe to 1L-MoS 2 and vice versa, as also suggested by the PL data (Figure 2b). The 532 nm excitation wavelength is above the band gap for both materials (InAs@ZnSe and 1L-MoS 2 ), causing them to absorb light and transfer charge carriers to each other. This results in ground-state depletion, which manifests as the negative signal observed in the transient absorption map. Notably, after selective excitation of InAs@ZnSe NCs in the HS with 800 nm excitation, bleaching of the A and B exciton states of 1L-MoS 2 is observed (Figure S5, Supporting Information), despite the below-gap excitation. This observation suggests that InAs@ZnSe NCs absorb the 800 nm light and transfer electrons to 1L-MoS 2 . [60]
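The biexponential analysis of the A-exciton bleach recovery can be reproduced with a routine least-squares fit, as sketched below on a synthetic trace. The time constants and amplitudes here are placeholders chosen near the reported 0.3 ps and 10 ps components; the actual values quoted in the text come from the measured kinetics, not from this example.

```python
# Illustrative sketch: biexponential fit of a transient-absorption decay trace,
# as used for the A-exciton kinetics in the text. The trace here is synthetic;
# amplitudes and time constants are placeholders chosen near the reported values.
import numpy as np
from scipy.optimize import curve_fit


def biexp(t, a1, tau1, a2, tau2):
    """Two-component exponential decay: a1*exp(-t/tau1) + a2*exp(-t/tau2)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)


# Synthetic decay (ps) roughly resembling 0.3 ps / 10 ps fast and slow components
t = np.linspace(0.05, 50, 400)
signal = biexp(t, 0.5, 0.3, 0.5, 10.0) + np.random.normal(0, 0.005, t.size)

popt, _ = curve_fit(biexp, t, signal, p0=(0.5, 0.5, 0.5, 5.0))
a1, tau1, a2, tau2 = popt
fast_weight = a1 / (a1 + a2)  # relative weight of the fast (formation) component

print(f"tau_fast = {tau1:.2f} ps, tau_slow = {tau2:.1f} ps, fast weight = {fast_weight:.0%}")
```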
Following spectroscopic verification, we proceeded with the electrical characterization of the InAs@ZnSe/MoS 2 HS. Our objective was to explore the correlation of charge transfer at longer wavelengths between InAs@ZnSe NCs and 1L-MoS 2 . A schematic illustration of the InAs@ZnSe/MoS 2 HS is depicted in Figure 3a, with comprehensive details of the HS implementation provided in Figure S1 (Supporting Information, Section 1.1). Initially, we examined the photoconductivity of pure 1L-MoS 2 devices under various excitations ranging from 300 to 850 nm. InAs@ZnSe NCs were then spin-coated onto the same 1L-MoS 2 flake, as illustrated in Figure 3a. Details about the HS implementation are provided in Section 1.1 (Supporting Information).

The dark I ds -V ds characteristics of 1L-MoS 2 and the InAs@ZnSe/MoS 2 HS are depicted in Figure 3b. The drain current for pristine 1L-MoS 2 is considerably lower than that of the InAs@ZnSe/MoS 2 HS, which experiences a significant increase (approximately ten-fold). [65] This surge in dark current can be ascribed to doping (Figure 2b) [56] and passivation arising from the deposition of InAs@ZnSe on the 1L-MoS 2 , a concept elucidated in our prior research. [57] In detail, 1L-MoS 2 possesses various defects and trap states, such as sulfur and molybdenum vacancies, which make it highly sensitive to O 2 and H 2 adsorption. [66] Coating 1L-MoS 2 with InAs@ZnSe NCs effectively passivates its surface defects and prevents direct exposure to O 2 /H 2 environments, an aspect which will be discussed later. [66] A similar observation was reported by Li et al., who utilized gold chloride hydrate to dope MoS 2 . [67] Their results demonstrated an approximately 32-fold increase in dark current. [67] The power dependence of the I ds -V ds curve for pristine 1L-MoS 2 and the InAs@ZnSe/MoS 2 HS under 532 nm excitation is illustrated in Figure 3c,d. As the power density is increased from 4.25 to 165.3 μW cm −2 , the drain current increases substantially at bias voltages between 0 and 1 V. For 1L-MoS 2 , the drain current rises from 0.53 to 2.7 nA, whereas for the InAs@ZnSe/MoS 2 HS it varies from 4.70 to 48 nA at a bias voltage of 1 V. The power-dependent photocurrent (ΔI = I light − I dark ) of the InAs@ZnSe/MoS2 HS as a function of voltage at various wavelengths is provided in Figure S6 (Supporting Information).

The dynamic photoresponse (I-t) characteristics of both pristine 1L-MoS 2 devices and the InAs@ZnSe/MoS 2 HS were investigated under various excitations ranging from 300 to 850 nm, as depicted in Figure 4a (see also Figure S7a,b, Supporting Information) and Figure 4b. 1L-MoS 2 exhibits a significant photocurrent in the UV and visible regions. [49] However, when excited at a 700 nm wavelength at maximum power (0.7 mW cm −2 ), the device shows a very poor signal, and further increasing the wavelength to 850 nm results in the photocurrent being undetectable. This behavior is anticipated, since the optical absorption cutoff of pristine 1L-MoS 2 extends up to 680 nm, as documented in prior studies. [14,23] Interestingly, adding InAs@ZnSe NCs onto the 1L-MoS 2 surface (InAs@ZnSe/MoS 2 HS) leads to a notable enhancement in the photocurrent observed across the entire wavelength range from 300 to 850 nm, as shown in Figure 4b. The expansion of the detection window beyond 680 nm can be readily explained by the significant absorption characteristics of InAs@ZnSe NCs in the infrared region. [36,39,41]
The multiple dynamic photoresponse and power dependence of the InAs@ZnSe/MoS 2 HS are presented in Figures S8 and S9 (Supporting Information), respectively. These figures demonstrate the excellent stability and reproducibility of the switching behavior of the InAs@ZnSe/MoS 2 HS under different wavelengths. To further explore signal detection, including in the infrared range, we calculated the signal-to-noise ratios (SNRs) using the formula SNR = (I light − I dark )/I dark , [23,68] as shown in Figure 4c. Our observations revealed that the InAs@ZnSe/MoS 2 HS exhibits superior SNRs compared with pristine 1L-MoS 2 . Notably, at 700 nm the SNR increased by approximately 3 orders of magnitude, and at 850 nm by approximately 4 orders of magnitude, surpassing all previous reports. [23] The multiple dynamic photoresponses under 850 nm excitation are depicted in Figure 4d, where "ON" indicates that the light is falling on the device and "OFF" indicates that the light is switched off. The multiple switching characteristics demonstrate that the InAs@ZnSe/MoS 2 HS exhibits high stability under 850 nm illumination. For a comprehensive understanding of the performance of the InAs@ZnSe/MoS 2 HS, key parameters such as the responsivity (R) and detectivity (D*) were calculated using the formulas provided below:

R = ΔI / (P A)

D* = R √A / √(2 e I d )

where ΔI is the photocurrent, P is the power density, A is the active area of the device, e is the electronic charge, and I d is the dark current. The values of R as a function of wavelength for pristine 1L-MoS 2 and the InAs@ZnSe/MoS 2 HS are depicted in Figure 5a. As explained above, the spectral variation in R demonstrates that 1L-MoS 2 exhibits a weak response at longer wavelengths. Interestingly, in the case of the InAs@ZnSe/MoS 2 HS, the overall response is increased compared with pristine 1L-MoS 2 , with the device showing a very high photoresponse at 700 nm. The value of R at 700 nm (V ds = 1 V and power density = 0.7 mW cm −2 ) is 1.35 A/W for 1L-MoS 2 and 1557.6 A/W for the InAs@ZnSe/MoS 2 HS, while at 850 nm it is ≈0.05 mA/W for pure 1L-MoS 2 and 10.58 A/W for the InAs@ZnSe/MoS 2 HS (the detailed calculation is provided in Section 1.10, Supporting Information). The variation in R as a function of power density is shown in Figure 5b. As we increased the power, the responsivity decreased, which is well documented. [69] The synergistic effect between 1L-MoS 2 and InAs@ZnSe NCs leads to an increase in responsivity compared with pristine MoS 2 . In the UV and visible spectrum, MoS 2 itself contributes to the responsivity. [14] However, with the incorporation of InAs@ZnSe NCs alongside MoS 2 , light absorption is significantly enhanced. As reported by many other groups, this increase in light absorption leads to a substantial increase in responsivity. [49,70] The combined structure effectively traps light energy, further enhancing device performance. [27] These findings are in line with previous reports and underscore the effectiveness of the composite structure in improving light absorption and device performance in the UV, visible, and infrared windows. [27,49,69,71]
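For reference, the responsivity and detectivity expressions given above can be evaluated as in the sketch below. The active area, photocurrent, and dark current are assumed placeholder values (not the measured parameters of this device), so the printed numbers only illustrate how the figures of merit are obtained from raw currents and the illumination power density.

```python
# Illustrative sketch: evaluating responsivity R = dI / (P * A) and detectivity
# D* = R * sqrt(A) / sqrt(2 e I_dark) for a photodetector. All device numbers
# below are placeholders, not the measured values of the InAs@ZnSe/MoS2 device.
import math

E_CHARGE = 1.602176634e-19   # C


def responsivity(delta_i, power_density, area):
    """R (A/W) from photocurrent (A), power density (W/cm^2), and active area (cm^2)."""
    return delta_i / (power_density * area)


def detectivity(resp, area, i_dark):
    """D* (Jones = cm Hz^1/2 / W), assuming dark-current-limited shot noise."""
    return resp * math.sqrt(area) / math.sqrt(2 * E_CHARGE * i_dark)


if __name__ == "__main__":
    area = 2e-7                    # cm^2, assumed active area of a micron-scale channel
    p = 0.7e-3                     # W/cm^2 (0.7 mW/cm^2, the value used at 700 nm)
    delta_i = 2.2e-7               # A, assumed photocurrent
    i_dark = 5e-9                  # A, assumed dark current
    R = responsivity(delta_i, p, area)
    print(f"R  ~ {R:.0f} A/W")              # ~1.6e3 A/W for these assumed inputs
    print(f"D* ~ {detectivity(R, area, i_dark):.1e} Jones (for these assumed inputs)")
```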
The enhanced response in the infrared region (700 and 850 nm), where 1L-MoS 2 is transparent, [30] implies that InAs@ZnSe NCs absorb light within this range and interact with MoS 2 to generate a photocurrent, as shown in Figure 5c. A photocurrent is observed in 1L-MoS 2 at 700 nm, but it is very weak (Figure 4a), suggesting that InAs@ZnSe also plays a significant role in this spectral region. At 850 nm excitation, instead, only the InAs@ZnSe NCs absorb the photons and hence play the major role in increasing the photoresponse of 1L-MoS 2 .

Figure 4. b) InAs@ZnSe/MoS 2 HS under similar conditions. Power densities at wavelengths of 300 nm (1.53 mW cm −2 ), 532 nm (0.0165 mW cm −2 ), 700 nm (0.7 mW cm −2 ), and 850 nm (2.55 mW cm −2 ) were applied with V ds = 1 V and V g = 0 V. c) The signal-to-noise ratio for pristine 1L-MoS 2 (derived from a) and the InAs@ZnSe/MoS 2 HS (derived from b) under the same conditions. d) The multiple dynamic photoresponse under 850 nm excitation with a power density of 2.55 mW cm −2 .

The detectivity (D*) of the pristine 1L-MoS 2 and the InAs@ZnSe/MoS 2 HS was calculated using the formula provided above. The detectivity of 1L-MoS 2 is on the order of 10 10 Jones, whereas for the InAs@ZnSe/MoS 2 HS it significantly improves to the order of 10 11 Jones. The spectral variation of D* follows a trend similar to that of R, [72] as given in Figure S10a,b (Supporting Information). The response time of the device was also determined; the rise time (τ r ) and fall time (τ f ) were found to be on the order of a few seconds (≈4 s to 13 s for the InAs@ZnSe/MoS 2 HS). A detailed discussion is provided in Supporting Information Section 1.12 and Figure S11a-d (Supporting Information). A comparison is provided in Table 1 below, where we compare our device performance with other reported broadband devices. The device stability was monitored, and it was observed that after 6 months the dark current (Figure S12, Supporting Information) remains approximately consistent with its initial level, indicating the robustness of the InAs@ZnSe/MoS 2 HS. To explore further the device mechanism and the existence of trap states, we calculated the value of the exponent α from the power-law dependence of the photocurrent on illumination power, ΔI = b P^α , [81] where α is the dimensionless exponent of the power law that gives information about the traps (for minority carriers) present in the photodetection system, and b is a parameter related to the photodetector. [81] It is known that when the value of α is 1, the device is free from traps; the device responsivity then remains constant with illumination power, which is known as the photoconductive effect. When α is less than 1, the photodetector presents trap states, and the responsivity depends sub-linearly on power density. [82] However, the MoS 2 and the InAs@ZnSe/MoS 2 HS do not reach a steady state (a detailed discussion is given in Section
1.14, Figures S13-S15, Supporting Information), rendering it impractical to study the power-dependent photocurrent through this approach. Consequently, we conducted a qualitative investigation of the transient photocurrent. The time derivative of the photoexcited charge density (n), once the illumination is stopped, is governed by a function of n itself: [83]

dn/dt = g(n)   (5)

Assuming that the photocurrent is proportional to n, the function g(n) can be determined from the time dependence of ΔI during the decay measurement. In Figure S16a,b (Supporting Information) we show the derivative of the photocurrent (dΔI/dt) plotted against ΔI after the light is switched off. We found that a power law of order 2.68 for MoS 2 and of order 2.66 for the HS provides an excellent fit to the data (as indicated by the dashed lines in Figure S16a,b, Supporting Information), suggesting that the differential equation governing generation and recombination under illumination is [83]

dn/dt = F − R r n^(1/α)   (6)

where F denotes the generation rate due to incident light and R r is a constant controlling the recombination rate. At steady state (dn/dt = 0), solving Equation (6) yields a dependence of ΔI on F of the form ΔI ≈ n ≈ F^α, with α = 0.373 for pristine MoS 2 and α = 0.376 for the InAs@ZnSe/MoS 2 HS. The non-unity values of α for both 1L-MoS 2 and the HS indicate that the photoconductive response in both materials is predominantly governed by the photogating effect. [13] The observed power-law dependence is attributed to the dynamics of traps and recombination centers facilitating photoconduction in nanostructured materials. Nonlinear variations in photocurrent with light intensity may arise from the distribution of traps and recombination centers within the band gap or from the saturation of these states under intense light excitation, which occur similarly in both systems.
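The decay-based extraction of the recombination order described above can be sketched numerically: integrate a decay with a known order, plot dΔI/dt against ΔI on a log-log scale, and read off the slope, whose inverse gives the exponent α of the sublinear ΔI ∝ F^α dependence. The trace below is synthetic and the chosen order is a placeholder, so the fitted numbers are illustrative rather than the 2.68 and 2.66 values reported for MoS 2 and the HS.

```python
# Illustrative sketch: extracting the recombination order 1/alpha from a photocurrent
# decay, following dn/dt = -R_r * n^(1/alpha) after the light is switched off.
# The decay trace is synthetic; the resulting exponent is a placeholder, not the
# measured 2.68 (MoS2) or 2.66 (HS) values reported in the text.
import numpy as np

true_order, R_r = 2.7, 0.05                      # assumed order and recombination constant
t = np.linspace(0, 200, 2000)
n = np.empty_like(t)
n[0] = 1.0
dt = t[1] - t[0]
for i in range(1, t.size):                       # simple Euler integration of dn/dt = -R_r n^order
    n[i] = n[i - 1] - R_r * n[i - 1] ** true_order * dt

dn_dt = np.gradient(n, t)                        # numerical derivative of the decay
mask = (n > 1e-3) & (dn_dt < 0)
slope, intercept = np.polyfit(np.log(n[mask]), np.log(-dn_dt[mask]), 1)

order = slope                                    # fitted recombination order 1/alpha
alpha = 1.0 / order                              # exponent of the steady-state law dI ~ F^alpha
print(f"fitted order ~ {order:.2f}, implied alpha ~ {alpha:.3f}")
```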
Conclusion

In summary, our study successfully demonstrates the integration of InAs@ZnSe NCs with monolayer MoS 2 via a straightforward spin-coating process, thereby extending the light detection window of the latter. Initially, Raman and PL studies unveiled charge transfer and doping in MoS 2 alongside the formation of an InAs@ZnSe/MoS 2 HS with a type II band alignment. We further validated this charge transfer using transient absorption spectroscopy, offering direct evidence of charge transfer and of its optical signatures. Subsequently, we confirmed our findings by implementing an InAs@ZnSe/MoS 2 heterostructure device and performing transport characterization. The device exhibited a broadband photoresponse ranging from 300 to 850 nm with a responsivity of ≈10 3 A/W and a detectivity of ≈10 11 Jones. The signal-to-noise ratio increases by 3 to 4 orders of magnitude for 700 and 850 nm excitation. This enhancement in photoresponse and improvement in the SNR are attributed to MoS 2 doping and increased absorption, which helps eliminate defect and trap states while promoting the photogating effect. Overall, this work establishes a foundation for combining 0D InAs@ZnSe with 2D MoS 2 , bridging fundamental insights into charge transfer physics with potential optoelectronic applications.

Synthesis of InAs@ZnSe NCs: InAs@ZnSe NCs were synthesized using our previously reported method with minor modifications. [39,41] The arsenic precursor was prepared by dissolving 0.2 mmol of amino-As in 0.5 mL of degassed OA at 40 °C for 5 min in an N 2 -filled glovebox. For the 1 m TOP-Se precursor, 10 mmol of selenium powder was mixed with 10 mL of TOP in an N 2 -filled glovebox at 250 °C for 30 min. For the synthesis of the InAs core NCs, 0.2 mmol of InCl 3 , 4 mmol of ZnCl 2 , and 5 mL of OA were degassed at 120 °C under vacuum for 40 min. The mixture was heated to 180 °C under N 2 for 30 min, then cooled to 120 °C and degassed under vacuum for an extra 30 min. The mixture was heated to 240 °C under N 2 , and the As precursor was injected into the flask quickly, followed by the injection of 1.2 mL of the DMEA-AlH 3 toluene solution. The temperature was then increased to 300 °C and the reaction was carried out for 15 min. The flask was cooled to 90 °C by removing the heating mantle. For the ZnSe shell growth (InAs@ZnSe NCs), 1 mL of the 1 m TOP-Se precursor was injected into the above InAs core NC solution, and the flask was heated to 300 °C and kept at this temperature for 10 min. The NCs were purified by dispersion in toluene and precipitation with ethanol. The final product was dispersed in toluene.

Monolayer MoS 2 Synthesis: We utilized the gold-assisted method to mechanically exfoliate monolayer MoS 2 . These monolayer MoS 2 sheets were then transferred onto a cleaned SiO 2 /Si substrate. Further details regarding the exfoliation process can be found in our previous work and in Section 1.1 of the Supporting Information (ESI). [32]

Device Fabrication: Electrical contacts were fabricated using UV photolithography. Detailed descriptions and step-by-step instructions for device fabrication can be found in the Electronic Supplementary Information (ESI), Section 1.2 (Supporting Information).

InAs@ZnSe/MoS 2 Heterostructure (HS) Formation: After successfully obtaining the monolayer MoS 2 and the high-quantum-yield InAs NCs, we proceeded by spin-coating the InAs NCs onto the MoS 2 to form a heterostructure. This was accomplished using a spin-coater at 2000 RPM for 1 min.

Characterization: The Helios Nanolab 650 by FEI was employed for imaging the nanocrystals and transition metal dichalcogenide (TMDC) monolayers. No specific sample preparation was required for SEM imaging. AFM was performed with the Veeco Multimode/Nanoscope IV system in tapping mode. Data processing was done with the open-source software Gwyddion. The Renishaw Raman system was utilized to collect high-resolution Raman spectroscopy data. The measurements were carried out using a 2400 lines/mm grating, with a 532 nm excitation wavelength and a power of 0.3 mW, employing a 50X objective lens under ambient conditions. For optical pictures, we utilized a Zeta profilometer to capture the images.

Micro-photoluminescence: The second harmonic of the pump of a Chameleon Compact OPO laser was used for excitation. The specifications are 532 nm pulsed light with a pulse width of 200 fs and a repetition rate of 80 MHz. The excitation laser was coupled into an IX83 Olympus microscope and focused on the sample with a 50X (N.A. 0.8) microscope objective lens. The photoluminescence light was collected and collimated by the same objective lens, which was also used for detection. The PL light was focused onto the slit of a Czerny-Turner HRS-500 spectrometer (Princeton Instruments) to resolve the light spectrally. The spectrally resolved light was detected and read out using a PIXIS CCD camera and LightField software (Princeton Instruments).
The ultrafast transient absorption spectroscopy was conducted using a ytterbium-based laser system (Pharos-SP-HP, Light Conversion). The pump wavelengths at 532 and 800 nm were obtained through a commercial optical parametric amplifier equipped with a second harmonic generation module (Orpheus, Light Conversion), while the supercontinuum white light was generated by focusing a portion of the laser output (at ≈1030 nm) onto a sapphire crystal. The pulse duration of the pump pulses was ≈160 fs, while the instrument response function, given by the cross-correlation between pump and probe, was estimated experimentally to be ≈230 fs. The delay between the pump and probe pulses was tuned by changing the length of the optical path of the probe. All the TA spectra were acquired with the Harpia-TA (Light Conversion) system. Experiments were performed at 50 kHz, and the acquisition used a chopper with a frequency of ≈78 Hz. The spot size of the pump at the sample was measured to be ≈120 μm, while the probe diameter was ≈60 μm. In the TA spectra and maps presented, the chirp of the probe pulse was corrected using CarpetView software (Light Conversion).

Transmission electron microscopy (TEM) images were collected on a JEOL JEM-1400Plus microscope with a thermionic gun (W filament) operated at an acceleration voltage of 120 kV. The diluted NC solution was drop-cast onto a copper TEM grid with an ultrathin carbon film. The absorption spectrum was recorded on a Varian Cary 5000 UV-vis-NIR spectrophotometer. The steady-state PL measurement was carried out on an Edinburgh Instruments FLS920 fluorescence spectrometer equipped with an Xe lamp. The sample was prepared by diluting NC samples in 3 mL of toluene in 1 cm path-length quartz cuvettes with airtight screw caps in an N 2 -filled glovebox.

The device's electrical characterization was conducted using a Suss MicroTec probe station equipped with an Olympus microscope. DC voltage and current measurements were performed using a Keithley 2614B source meter. To excite the sample, a tunable laser from NKT Photonics, with a wavelength range of 480-700 nm, was coupled with a fiber cable from Thor Labs. For 850 nm excitation, we utilized a Thor Labs multi-channel fiber-coupled laser source. Additionally, a Thor Labs LED was employed for 300 nm UV excitation. An IR card from Thor Labs was used to visualize the laser spot for 850 nm light. Power measurements were carried out using a Thor Labs power meter (PM130D). All measurements were performed under ambient conditions.

Figure 1. a) Transmission electron microscope (TEM) image of InAs@ZnSe core@shell NCs. b) Absorption (black curve) and photoluminescence (PL) (red curve) spectra of as-synthesized InAs nanocrystals in solution. c) Optical image of Au-assisted exfoliated MoS 2 on a SiO 2 /Si substrate. The monolayer (1L) MoS 2 is marked with a white circle, while an arrow indicates the bulk or multilayer MoS 2 . d) SEM image displaying an area covered with InAs@ZnSe nanocrystals shaded in grey, while the InAs@ZnSe/MoS 2 heterostructure (HS) is marked in yellow for clarity. e) Raman spectra of MoS 2 (black curve) and InAs@ZnSe/MoS 2 HS (grey curve) samples under 532 nm excitation (power 0.3 mW) under the same conditions. f) Schematic illustration of the InAs@ZnSe NCs and 1L-MoS 2 HS, along with energy level diagrams of 1L-MoS 2 and InAs@ZnSe NCs before contact, representing the type II band alignment.

Figure 2.
a) Photoluminescence spectra of InAs@ZnSe NCs (blue curve) and HS (red curve) under 532 nm excitation, showing quenching of the InAs@ZnSe emission. b) Photoluminescence spectra of MoS 2 (blue curve) and HS (red curve), also showing quenching of the 1L-MoS 2 emission (532 nm excitation, excitation power of 500 μW, and 5 s integration time). The 1L-MoS 2 PL was deconvoluted into the B exciton (dark blue), A exciton (dark green), and A trion (dark red) by fitting with three Lorentzian functions. c) Transient absorption spectra of 1L-MoS 2 (blue curve) and HS (red curve) at 1 ps delay time with an 800 nm pump (pump power of 100 μW, at 50 kHz and 120 μm spot size). d) Transient absorption decay of the A exciton in 1L-MoS 2 (blue curve) and HS (red curve) with a 532 nm pump (pump power of 50 μW, at 50 kHz and 120 μm spot size); both decays were fitted with a bi-exponential decay function.

Figure 3. a) Schematic illustration of the InAs@ZnSe/MoS 2 -based HS consisting of monolayer MoS 2 and InAs@ZnSe nanocrystals. b) Dark I-V characteristics of the MoS 2 and InAs@ZnSe/MoS 2 HS devices. I-V characteristics of the c) pristine 1L-MoS 2 and d) InAs@ZnSe/MoS 2 HS under 532 nm excitation with increasing excitation power density. The power density values are indicated according to the color coding in panels c and d.

Figure 5. a) Responsivity as a function of wavelength for pristine 1L-MoS 2 and the InAs@ZnSe/MoS 2 HS under the same conditions as depicted in Figure 4a (for 480 nm (0.45 mW cm −2 ) and 633 nm (0.53 mW cm −2 )). b) Responsivity as a function of power for the InAs@ZnSe/MoS 2 HS device at a wavelength of 700 nm under fixed bias voltage (V ds = 1 V). c) Schematic illustration of the process of charge transfer between 1L-MoS 2 and InAs@ZnSe NCs in the HS. The lines connecting the data points (a and c) serve as a guide to the eye only.

Table 1. Comparison of the device performance of the InAs@ZnSe/MoS2 hybrid with its individual counterparts.
8,514.8
2024-09-06T00:00:00.000
[ "Physics", "Engineering", "Materials Science" ]
Playing Well on the Data FAIRground: Initiatives and Infrastructure in Research Data Management

Over the past five years, Elsevier has focused on implementing FAIR and best practices in data management, from data preservation through reuse. In this paper we describe a series of efforts undertaken in this time to support proper data management practices. In particular, we discuss our journal data policies and their implementation, the current status and future goals for the research data management platform Mendeley Data, and clear and persistent linkages from published papers to the corresponding individual data sets stored in external data repositories through partnership with Scholix. Early analysis of our data policy implementation confirms significant disparities at the subject level regarding data sharing practices, with most uptake within the Physical Sciences. Future directions at Elsevier include implementing better discoverability of linked data within an article and incorporating research data usage metrics.

interoperability and requirements for metadata and data permanence to allow storage and access to this growing body of publicly available research data, through such organizations as the Research Data Alliance (RDA). Defining, meeting, and raising the standards for open science, including best practices for research data management, is generally a community effort with global stakeholders. At the 2016 G20 Summit in Hangzhou, the G20 leaders declared their support for FAIR data principles being implemented to promote open science and to enable appropriate access to publicly funded research results [2]. Similarly, stakeholder groups such as CODATA and the European Open Science Cloud are actively engaged in enabling FAIR Data Principles throughout the scholarly workflow [3]. In specific domains, there are tailored efforts to focus the research data management (RDM) practices of an entire community around these standards. For instance, in the Earth and Space Sciences, a coalition of groups representing the international science community was convened by the American Geophysical Union (AGU) to develop standards to connect researchers, publishers, and data repositories in these disciplines to enable FAIR data [4]. Despite these ambitious goals, research data management practices are still heterogeneous both geographically and across different areas of research. While most researchers agree that reusing data from others would benefit their research, data sharing is not widespread, and researchers report having little experience with it. According to the most recent Open Data Report [5], 73% of academics surveyed said that having access to published research data would benefit their own research, while only 64% are willing to allow others to access their research data. One of the reasons for this disconnect is that, despite the growth of information on the importance of data sharing, most scholarly research is still aimed at publishing papers in reputable journals. Sharing and publishing data is not perceived by authors as a priority of their institutions [5,6]. It is for this reason that we see a natural opportunity for scholarly publishers to take an active role. Manuscript submission, which prompts authors to provide information about their research, is a natural moment to bring research data together with an article: to require and enable data sharing, allow data annotation, and
connect RDM tools and standards to the publishing workflow. Creating these pathways to open data enables the raw data and the paper to be linked together, without extraneous new workflows for researchers. We therefore also actively support and enable proper data citation practices, as outlined by the Force11 Data Citation Guidelines [7], and have helped lead a convergence of science publishers on modes and systems of data citation [8]. Proper data citation practices can support citation counts, downloads, and views of data sets, which can act as important metrics to establish review and reuse of data and serve to motivate the scholarly community to share and publish their data.

NB: For the purposes of this article, data sharing will largely be defined as how data are saved, shared, cited, and trusted, with each of these components incorporating several layers. Moreover, we will use "research data" interchangeably to encompass raw data, code, software, and other research objects. We recognize that different communities will focus on the sharing and creation of different research objects, and it is not our intention to impose a definition of those digital research output objects. There has been widespread agreement on standards that come from such discussions with the Research Data Alliance, Force11, and FAIRsharing, with nuanced understanding about different kinds of data and the domain-specific repositories that might host them.

Below, we discuss a series of initiatives taken largely over the last few years to facilitate proper practices for data deposition, curation, and discovery. This paper is organized as follows: first, we discuss the overall principles behind our RDM practices and tools (2.1); then, we discuss a series of efforts that we have engaged in, together with the community of stakeholders, over the past five years, and the practical outcomes we have seen from these efforts (2.2-2.6). Lastly, we discuss the implications of these efforts and some thoughts on moving forward with this important challenge.

Overall Vision on Research Data Management

Over the past five years, we have developed multiple initiatives aimed at promoting data management and sharing, discussed in the rest of this section. Throughout these efforts, we have been driven by an overarching idea of a "data Maslow hierarchy," as depicted in Figure 1 below, from [9]. The idea behind this figure is that all components of data sharing support the "highest" goal (that of data reuse), but this goal cannot be obtained unless the "lower-level" components are in place: data must be stored before it can be accessed, and it must be accessible before it can be reused. In our educational outreach (see, e.g., Researcher Academy [10]), we consistently emphasize that good data management starts in the research planning phase, and an important role is played by a fruitful interaction with data librarians, data stewards and curators, and others at the researchers' home institution or in their specific community of practice.
In the remainder of this section, we will discuss a series of efforts which we have undertaken to support this vision.

Research Data Deposition and Citation: The TOP Guidelines

As a first step to address the growing demand for guidance and tools responding to calls for transparency, openness and reproducibility of research, in 2016 we implemented data citation guidelines and support, using standard reference styles, across all applicable journals (approximately 2,200) [12]. In September 2017, we introduced a five-tiered data sharing policy across more than 1,700 of these titles. These policies were developed internally, in tandem with the Transparency and Openness Promotion (TOP) guidelines established by the Center for Open Science (to which Elsevier was a signatory) [13]. The policy options include:

D) Authors are required to deposit their data in a repository and to cite and link to the data set in the article (with no option of an Author Statement about why data cannot be shared).
E) Authors are required to deposit their data in a relevant repository, cite and link to the data set in their article, and peer reviewers are asked to review the data prior to publication [14].

In our initial roll-out, the majority of the journals eligible for the policy implementation were set to the default of Option B (Table 1). (Ineligible journals include case-report journals, which include little additional data or potentially sensitive patient data whose risk of exposure outweighs the sharing benefit, review journals, and journals not on a centralized editorial system; in this last case, off-system journals could and often do have data policies, but those are not enforceable, or trackable, through an editorial platform.) Though Option B does not require data deposition, foregrounding data policy at the level of the article submission process is intended to heighten researcher awareness of best research data management practice. Moreover, this range of policies was designed with a range of communities and users in mind; publishers and editors were able to use them as starting points of discussion for individual journals, so that a journal's data policy would be informed by the existing practice within a specific research community. We had previously conducted a survey among 113 editors from a range of disciplines in August 2017, exploring attitudes and perceptions about data policies. Most editors thought their authors would be willing to share research data at time of publication (56 respondents answered "moderately willing" and 20 answered "very willing"; only 5 editors thought their authors would be "not at all" willing). Not surprisingly, most editors considered the most effective way to share data (prior to data policy implementation) was either to include them within a journal article (19 respondents) or as supplementary data to an article (56 respondents), with deposition in an appropriate data repository chosen by only 37, or 32.4%, of respondents. Data sharing policies, then, also became an educational tool about evolving standards of data management. The policies were implemented on journals that were on one of our two editorial systems, EES or EVISE; however, this functionality was implemented on EES slightly later than on EVISE. The majority of the "none/parked" column is due to journals that were ineligible for the policy rollout because of editorial platform transitions (e.g. moving from EES to EVISE). The
opt-out policies were largely due to community sensitivities expressed by editors.

Data sharing can occur at multiple points of the research workflow; often, it occurs outside of the publication workflow of a paper, meaning that data might be shared to a repository before or after publication of any corresponding research articles. Given our role, we looked to optimize data sharing at point of submission, implementing these data policies while also enabling our editorial systems to accommodate their requirements. Our two main editorial platforms, EVISE and EES, were updated so that at point of submission authors could comply with the policies by providing the DOI, PID, or accession number of their underlying data already stored in a repository, by uploading data directly to Mendeley Data (on which, more below in Section 2.3) as a co-submission, or by providing a Research Data Availability statement directly with their article submission. At present, this statement is explicitly oriented toward an explanation of why data cannot be shared; this is a potential area for further investigation, to see whether modifications to the requested author statement might lead to greater data sharing at point of submission.

In approximately 18 months, only 3% of all articles handled in EES included a link to shared data, and 4% of those in EVISE did, out of nearly 220,000 articles handled on EES and over 1.6 million articles handled on EVISE (it is possible that this 1% discrepancy in uptake between the two editorial systems is due to EES's functionality for the data policy implementation coming on board later than EVISE's). Exploring the results by subject area, we do see meaningful disparities. Over half of the papers that shared data at point of submission in both systems were in the Physical Sciences (55% EES, 52% EVISE). Health and Medical Sciences, for which data sets include "Clinical Trials", followed at 16% in both EES and EVISE. The areas of greatest uptake in the Physical Sciences include Energy and Earth Sciences; Environmental, Agricultural, and Aquatic Sciences; and Applied Bioscience. To build upon this de facto trend, the Energy and Earth Sciences portfolio in 2018 changed the default policy for the majority of its journals to Option C, which has also correlated with deposition to Mendeley Data (see the following subsection).

In the months ahead we expect more journals to adopt stricter data policies. We are in discussion with communities that have already signaled interest in pushing toward more transparency and increasing the rate of up-front submission of research data. We will continue to offer communities the data sharing policies that are the best fit for them, with the guiding idea that transparency will continue to ramp up and become established across multiple disciplines. This means that our policies can scale from encouraging data deposition (or a statement) to eventually requiring deposition, based on continuing dialog with key data standards organizations, workshops, and attunement to funder policies.
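To give a rough sense of the absolute volumes implied by the uptake figures reported above, the small sketch below multiplies the approximate article counts by the approximate uptake rates; the inputs are the rounded numbers quoted in the text, not exact counts from the editorial systems.

```python
# Rough estimate of the number of submissions with a data link, using the
# approximate figures reported above (rounded values, not exact counts).
systems = {
    "EES":   {"articles": 220_000,   "uptake": 0.03},
    "EVISE": {"articles": 1_600_000, "uptake": 0.04},
}

for name, s in systems.items():
    linked = s["articles"] * s["uptake"]
    print(f"{name}: ~{linked:,.0f} of {s['articles']:,} articles "
          f"({s['uptake']:.0%}) included a link to shared data")
```

Even at these low percentages, the absolute number of submissions carrying a data link is in the tens of thousands, which is why subject-level differences become visible.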
Linking and Finding Data

To further support and improve data sharing practices, Elsevier implemented Database Linking, working with a multidisciplinary range of repositories and guiding authors through the best practices for correctly citing data. We discuss three initiatives here: the Database Linking Tool, including a cross-stakeholder initiative called "Scholix"; the ORCID Link; and the Data Search tool.

Database Linking and Scholix

Enabling links between a paper and a data set at submission is just one of several potential points at which a researcher might connect the two research objects. It allows for a more automated workflow, so that a paper in review and production always has a link to the relevant data set. Such a link follows the article through publication and enables bidirectional links for the reader, which makes it a preferred workflow. However, we are also able to support post-publication links to data sets where needed.

The Database Linking Tool [15] includes about 80 repositories, including examples such as DRYAD, PANGAEA and HEPData. Database Linking creates a bidirectional link between articles and data repositories, such that data can easily be discovered and accessed. To link to a database, when submitting an article the author can simply include a data DOI or PID, or indicate in which repository the data have been deposited. We work with a multidisciplinary range of specific repositories and guide authors through the best practices for correctly citing data on them; e.g. for repositories with accession numbers or identifiers instead of DOIs, we provide summarized instructions by discipline. We require that repositories interested in linking with Elsevier provide a description of, and link to, general information about their holdings, use, and policies, as well as links to their formatting information, citation information, XML schema and coding, and that the repositories themselves are FAIR compliant even if not certified by CoreTrustSeal.

When an article with an associated data set is published on ScienceDirect, a link to the repository (and the repository logo) is added to the article, making it easy for the reader to find and access the data. For journal articles to meet the minimum of findability (the F of FAIR), a link pointing to the available data must be provided by the authors (or a data availability statement in its stead).
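As an illustration of the kind of data citation that the guidance discussed above steers authors toward, the following sketch assembles a citation string from the elements commonly recommended by the Force11/DataCite guidance cited earlier. All field values are hypothetical placeholders rather than a real data set, and the exact element order in practice follows the journal's reference style.

```python
# Illustrative only: assemble a data citation from elements commonly recommended
# by the Force11/DataCite guidance cited above. All values below are hypothetical
# placeholders, not a real data set or DOI.
citation_fields = {
    "creator":    "Smith, J.",
    "year":       "2018",
    "title":      "Example soil respiration measurements",
    "version":    "v1",
    "repository": "Mendeley Data",
    "identifier": "https://doi.org/10.xxxx/example",  # placeholder DOI
}

citation = ("{creator} ({year}). {title} [Data set], {version}. "
            "{repository}. {identifier}".format(**citation_fields))
print(citation)
```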
To further improve linking between research literature and research data, a community, multi-stakeholder initiative was created under the name of the Scholix Framework (SCHOlarly LInk eXchange) [16]. This effort, of which Elsevier has been one of the initiators, is a conceptual framework for interoperability developed in consensus between data centers, publishers, CrossRef, DataCite and OpenAIRE, among other stakeholders. It proposes a concrete standard approach to exchange data-literature links between established handlers of research objects, such as CrossRef and publishers [17]. Elsevier currently organizes bulk uploads of pairs of links between articles and associated data sets (independent of repository) to the Scholix hub CrossRef. We plan to automate these uploads so that these links to data are displayed on ScienceDirect at approximately the time of article publication. However, pairs of links between Mendeley Data (see 2.4) data sets and associated articles (independent of publisher) are already sent to the Scholix hub DataCite at time of publication. Elsevier journal articles bear a "Research Data for this Article" section, which is informed by a query from ScienceDirect to OpenAIRE asking for any research data linked to the article (data, software, accession numbers). Any member of the community is also able to autonomously retrieve these article-research data link pairs, and the reverse pairs, by querying OpenAIRE [18] or DataCite directly [19].

Linking Data sets to ORCID Identifiers

As one of the founding sponsors of the ORCID project and its ancillaries, Elsevier has been invested in making ORCID a standard for the identification of scholars since its inception in 2012 [21]. Since 2012, Elsevier's editorial systems have supported the identification of authors with ORCID as part of the manuscript submission process. Linking to an ORCID is also available for data sets deposited in Mendeley Data Repository, albeit currently only indirectly, via the Mendeley profile of the author being connected to the data set. Mendeley users are invited to link their profiles to ORCID, or another widely used researcher identifier, for the data sets generated via Mendeley Data Repository.

The Mendeley Data Search engine

Driving discoverability of data and facilitating linking between Elsevier-published articles and external repositories, as well as collaborating with subject-specific initiatives to further increase transparency, is another key initiative. The Search function of Mendeley Data is a data search engine which initially went live as a standalone tool in June 2016. It is now integrated with the Mendeley Data platform [22] and is openly accessible. It currently indexes over 10 million data sets from 35 supporting external repositories, including Zenodo, PANGAEA and DRYAD, as well as Mendeley Data Repository itself. Its Push API allows any repository to push their data so that they appear in Mendeley Data Search results. Furthermore, it continues to evolve to employ the latest advancements in search technology (e.g. relevancy of results is enhanced by deep indexing of data). Mendeley Data Search allows researchers to search for different data types and formats across a variety of domain-specific and cross-domain institutional data repositories and other data sources. The results retrieved are rendered with a preview functionality for quick inspection and can be filtered using different facets (repository name, data type, sources, etc.).
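As an illustration of the community retrieval route mentioned in the Scholix discussion above (querying DataCite directly), the sketch below fetches the metadata of a data set DOI from the public DataCite REST API and prints its related identifiers, where article-data links typically surface. The endpoint shape and field names follow the DataCite JSON:API as we understand it, and the DOI used is a placeholder.

```python
# Hedged sketch: retrieve data-literature links for one (placeholder) dataset DOI
# via the public DataCite REST API. Endpoint and field names reflect the DataCite
# JSON:API as commonly documented; adjust if the API differs in your environment.
import requests

doi = "10.17632/examplexyz.1"  # placeholder DOI, not a real data set
resp = requests.get(f"https://api.datacite.org/dois/{doi}", timeout=30)
resp.raise_for_status()

attributes = resp.json()["data"]["attributes"]
for rel in attributes.get("relatedIdentifiers", []):
    # Each entry typically carries the related DOI/URL and the relation type,
    # e.g. "IsSupplementTo" pointing at the corresponding journal article.
    print(rel.get("relationType"), "->", rel.get("relatedIdentifier"))
```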
Infrastructures Supporting Research Data Sharing

Our hub for the complete research data management lifecycle, which further supports standardized data sharing, is the data repository and its suite of related functions in Mendeley, called Mendeley Data. Working closely with partner institutions to understand what successful data management looks like, Mendeley Data provides a modular research data management ecosystem which integrates through open APIs with the global data ecosystem, including DANS, DataCite, OpenAIRE, ORCID and repositories. The product consists of five modules: Data Search, Notebook, Manager, Repository and Monitor. At present, the Repository, Search, and Manager are live. Notebook has been designed to become an Electronic Lab Notebook (ELN) integrated with the rest of the Mendeley Data platform, built upon the lessons learned from the standalone ELN Hivebench. Mendeley Data Monitor has been piloted with a number of development partner institutions and will be implemented in the near future [23].

Each module covers different aspects of the research data lifecycle. Crucially, researchers who currently do not share data, or who find it very difficult or labor intensive to do so, identify legal issues (e.g. confidentiality/ethical issues), formatting (e.g. presenting data clearly), logistics (e.g. where to upload) or data cleaning (e.g. making the data usable) as their main obstacles [24]. The vision for the Mendeley Data platform is to provide researchers and institutions flexibility in meeting their RDM needs along the research data lifecycle. For example, researchers can set embargoes for their uploaded data sets so that they only become publicly available after a deferred date is reached. We will also offer institutions the ability to customize the metadata supplied for data sets in the repository, to supplement the standard metadata requirements and allow for greater detail in annotation, aligned with institution-specific data management policies. Mendeley Data Repository is a general (not subject-specific) repository with long-term, guaranteed preservation of data through a dark archiving agreement with DANS; it mints DOIs for published data sets and links them to authors' ORCID identifiers. It is also a recipient of the Data Seal of Approval from CoreTrustSeal, which assesses repositories on eighteen metrics in alignment with FAIR principles [25]. One key initiative over the course of 2017 was enabling our editorial systems (EES and EVISE) to connect directly to Mendeley Data Repository, among other repositories, and to allow authors to upload research data at the point of submission of their articles. This prompt has proven to be a significant motivator for sharing data. From 2017 until December 2018, we saw over 3,700 data sets across the life sciences, physical sciences, and health sciences uploaded to the repository. Below are the subject areas which contributed data sets representing over 5% of the depositions (Table 2). With caveats in interpreting these data, it does seem clear that one outcome of the Earth and Energy Sciences portfolio adopting a less open-ended data policy was the spike in data sets deposited to Mendeley Data. In Physics, all software associated with articles in Computer Physics Communications is published on Mendeley Data with an open license (about 400 computer programs since May 2016). In addition, the associated Program Library at Queen's University Belfast [26] is also being imported (more than 3,000 computer programs stored since 1969). The licenses of imported codes
are converted to open ones, making the resulting library on Mendeley Data easily findable and freely available. Because Mendeley Data is a general repository, however, we would not expect to see significant take-up in areas where established and familiar repositories exist; for example, in Chemistry, which accounts for just 5.5% of the data deposition, articles typically do have associated data sets, but these are hosted on subject-specific repositories.

In addition to the repository element of Mendeley Data, the Manager module serves institutional users with a collaborative Project environment and workflow tool that enables researchers to share, organize and jointly annotate data in one place. This allows researchers to prepare data to be published and shared in the form of a data set. Short-term development of this module aims not only to give researchers and institutions the opportunity to enrich their data sets with custom metadata, but also to integrate with tools in the ecosystem, data sources and repositories both upstream and downstream of a data set's creation.

Further development of Mendeley Data will focus on institutional customers who are looking to monitor data created by their researchers, e.g. tracking whether data exist in a local repository or a third-party repository, in Mendeley Data Monitor. Providing a deeper understanding of where data live also enables tracking of citations and other metrics around their usage. This publisher-independent workflow toolbox will help librarians improve adoption of data sharing and thus better comply with new mandates and funder regulation.

Role of Data Journals

Data and software journals, unknown a few years ago, have proven to be a valuable addition to the landscape, offering another route of findability for research data and including more detail and context than metadata alone generally covers. Data journals supplement the data held in repositories and offer another way to find the data (through A&I services, for example) while contextualizing the data themselves and oftentimes complementing full-length research articles. Data journals are also particularly attractive as publication outlets for replication data or negative results, as these can fall outside the aims and scope of traditional field-specific journals but be important for other researchers' use. Because data journals offer their authors validation of their data via peer review, the data (negative or positive) are credentialed, and the researcher has their work recognized by a traditional metric of output, i.e. publication in a peer-reviewed journal. Data and software journals are generally also Open Access, which increases the visibility of their publications. These open access publications offer a significant incentive to researchers looking to share data. Generally, data journals promote data sharing in a way that can be aligned with institutional priorities, e.g. as formal, indexed publication outlets for research, with the familiar metrics of publication citations and altmetrics. In many cases they do not contain research data themselves but link to data repositories.
Our flagship data journal, Data in Brief, was launched in 2014 and has had a 40.7% CAGR (compound annual growth rate) between 2015 and 2018, with a CiteScore of 0.70. Our software journal, SoftwareX, was launched in 2015 and is growing rapidly (expected to exceed 100 publications in 2019); it aims to highlight the impact of software on today's research practice and on new scientific discoveries in almost all research domains, and it also emphasizes the contributions of software developers who are, in part, responsible for this shift in research trends. The validation provided by journals like Data in Brief and SoftwareX, which review the data with their descriptors and the software, respectively, is an important tool for researchers seeking trustworthy data from outside their networks. To improve the peer-review process and provide researchers with an easy way to share, discover and run their published code, SoftwareX and several other software journals (including Computer Physics Communications, Future Generation Computer Systems and Cell Systems) have partnered with Code Ocean [27], a cloud-based computational reproducibility platform where researchers can upload their codes and data. Codes are privately shared with the editors and reviewers, and once a code is reviewed and accepted it receives a citable and permanent DOI, meaning that others will be able to access, download and replicate the code.

Table 3. Roadmap to implement FAIR data support at Elsevier: a high-level overview of the steps necessary to support FAIR data creation and sharing. Cells shaded green to red reflect whether an implementation is planned for the future (red), has already been initiated (yellow), or is live (green). Note that the status of these implementations is subject to change as we continuously revise them with input from all stakeholders in the research community.

This roadmap captures a bird's-eye view of how Elsevier is making different aspects of data sharing and creation a reality. It is the result of a series of efforts undertaken by teams spanning the Journals, Operations and Product divisions across the company.
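For readers unfamiliar with the growth metric quoted for Data in Brief at the start of this subsection, the short sketch below shows the standard CAGR arithmetic. The publication counts used are hypothetical placeholders chosen only to illustrate the formula; only the 40.7% rate and the 2015-2018 window come from the text.

```python
# Worked example of the CAGR formula, CAGR = (end/start)**(1/years) - 1.
# The publication counts are hypothetical placeholders, not Data in Brief's
# actual output; only the 40.7% rate and the 3-year window come from the text.
def cagr(start_value: float, end_value: float, years: int) -> float:
    return (end_value / start_value) ** (1 / years) - 1

start, years = 100, 3                  # hypothetical 2015 count; 2015 -> 2018
end = start * (1 + 0.407) ** years     # what a 40.7% CAGR implies for 2018
print(f"implied 2018 count: {end:.0f}")                   # roughly 2.8x the base
print(f"recovered CAGR: {cagr(start, end, years):.1%}")   # 40.7%
```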
Following the successful trials in our Earth and Energy Sciences journals, we must urge more of our journals to adopt data policies that require, rather than encourage, data deposition. As mentioned above, stakeholders in the Earth, Space and Environmental Sciences (the Coalition for Publishing Data in the Earth and Space Sciences, or COPDESS) signed a Commitment Statement to ensure that "research outputs, including data, software, and samples or standard information about them, are open, FAIR, and curated in trusted domain repositories whenever possible, and that other links and information related to scholarly publications follow leading practices for transparency and information" [4]. In addition, Elsevier is actively pursuing a program to realign the data availability statements that authors provide with this Statement. Data statements are now generally encouraged rather than required; to align with FAIR principles, our guides for authors would need to change and our data policies would need to be reconsidered. The intended use of the Author Statement requires reexamination; it is currently promoted as the space for an author to describe why their data cannot be made publicly available. However, the optimal use of this statement would be for authors to describe how the data can be accessed and reused [28]. Our data policies can and should continue to evolve, recognizing the likelihood of successful adoption by subject-area communities, supported with technical implementation to facilitate seamless sharing and remove unnecessary obstacles for authors.

In addition to building on our data policies in a direction that increasingly supports FAIR, we are also in the process of reviewing, refining and improving the infrastructures and workflows that provide the research data and literature linking capabilities needed to enable FAIR data creation. Steps include: improving adoption of ORCID as a researcher identifier within our RDM platform Mendeley Data, ensuring the efficiency of our participation in the Scholix framework (as described in Section 2.3), and enabling data citation within an article body and bibliography. This represents the current state of implementing the infrastructure to support FAIR data creation and sharing at Elsevier. A more detailed evaluation of each element of the research data lifecycle will be critical to decide the appropriate next steps toward FAIR.

Being part of and integrating with the Research Data ecosystem

Further education, in an ongoing dialog with all stakeholders (publishers, funders, repositories, and very much including researchers themselves), is a necessity for achieving more of the goals of open science. Elsevier (along with other publishers and data repositories) meets a real need by providing resources around data sharing best practice. Institutions can and do set RDM policies, but integrating RDM into curricula and establishing it in lab practice tends to lag behind. Publishers and organizations like DataCite are now often present at scientific conferences, leading field-specific workshops dedicated to Research Data Management, but the feedback is often that finding time to integrate these workflows into institutional practice is challenging.

Table 1.
Summary of results of the implementation of data sharing policies at Elsevier, 2017-2018. Over 2,200 journals were eligible for the data sharing roll-out, and their editors were consulted on the advised policy to be instated.

Table 2. Deposition of data during manuscript submission to Mendeley Data Repository per subject category, 2017-2018.
6,731.2
2019-11-01T00:00:00.000
[ "Computer Science" ]
Relationship between high intra-abdominal pressure and compliance of the pelvic floor support system in women without pelvic organ prolapse: A finite element analysis

Previous studies mainly focused on the relationship between the size of the prolapse and injury to the supporting tissues, but the strain and stress distributions of the supporting tissues as well as high-risk areas of injury are still unknown. To further investigate the effect of supporting tissues on organs and the interactions between organs, this study focused on the relationship between high intra-abdominal pressure and the compliance of the pelvic floor support system in a normal woman without pelvic organ prolapse (POP), using a finite element model of the whole pelvic support system. A healthy female volunteer (55 years old) was scanned using magnetic resonance imaging (MRI) during rest and Valsalva maneuver. According to the pelvic structure contours traced by a gynecologist and anatomic details measured from dynamic MRI, a finite element model of the whole pelvic support system was established, including the uterus, vagina with cavity, cardinal and uterosacral ligaments, levator ani muscle, rectum, bladder, perineal body, pelvis, and obturator internus and coccygeal muscles. This model was imported into ANSYS software, and an implicit iterative method was employed to simulate the biomechanical response with increasing intra-abdominal pressure. Stress and strain distributions of the vaginal wall showed that the posterior wall was more stable than the anterior wall under high intra-abdominal pressure. Displacement at the top of the vagina was larger than that at the bottom, especially in the anterior-posterior direction. These results imply potential injury areas under high intra-abdominal pressure in non-prolapsed women, and provide insight into clinical management for the prevention and surgical repair planning of POP.
Introduction

Pelvic organ prolapse (POP) is a common problem in older women, particularly after menopause, and is generally associated with defects or injuries of the pelvic floor support system (1). Half of all parous women experience POP, and 10-20% have a lifetime risk of needing surgical care (2). Increased intra-abdominal pressure, such as with loaded walking, coughing, sneezing, squatting, defecating, and bending, is an important independent risk factor for POP (3). Constipation as well as obesity can also induce chronic high intra-abdominal pressure (2). A number of studies (4-6) have simulated anterior and posterior vaginal wall prolapse under high intra-abdominal pressures. These studies showed that combined injury to the levator ani muscle and the vaginal apex results in anterior vaginal wall prolapse, and combined injury to the levator ani muscle and posterior supporting tissues leads to posterior vaginal wall prolapse. These findings were of great significance for exploring the mechanism of vaginal prolapse. However, these studies mainly focused on the relationship between the size of the prolapse and injury to the supporting tissues, and neglected the strain and stress distributions of the supporting tissues as well as high-risk areas of injury. Another important factor for prolapse is the effect of supporting tissues on organs and the interactions between organs. Few studies have reported these two crucial factors in healthy people. The objective of this study was to simulate the pelvic visceral mechanical response of non-prolapsed women under high intra-abdominal pressure using a three-dimensional (3D) finite element model (FEM) of the pelvic floor support system. The model was established using ANSYS software (ANSYS, Houston, TX, United States), and the anatomy of the single volunteer subject was obtained by magnetic resonance imaging (MRI). Vaginal wall displacement and the distributions of stress and strain in the supporting tissues were calculated, and possible initial damage points in the supporting tissue and the relationship between intra-abdominal pressure and pelvic floor visceral displacement were studied.

Reconstruction of 3D FEM

One asymptomatic healthy female volunteer (55 years old, BMI: 20.96 kg/m2), confirmed by physical examination and with no previous pelvic surgery, was recruited. The subject signed informed consent for inclusion in this institutional review board-approved study. She is a 50th-demographic-percentile subject from an IRB-approved case-control mechanistic cohort study at the Peking University People's Hospital comparing women with anterior vaginal wall prolapse with normal asymptomatic women (Institutional Review Board HUM00012823). Axial, sagittal, and coronal MRI images were acquired while the subject was in the supine position, at rest (T2-weighted) and during Valsalva, using a 3.0-T GE scanner (Discovery MR750 3.0 T; GE Healthcare, Milwaukee, WI, United States) with a 32-channel torso phased-array coil. Images were acquired with the following parameters: TR/TE 3,000/102-108 ms; field of view 26-28 cm; slice thickness 4 mm, interleaved; gap 1 mm; 2 acquisitions; 90 continuous images were obtained. The MRI images were then imported into the medical image processing software Mimics 10.01 (Materialise Inc., Leuven, Belgium) for 3D calculations and reconstruction by an experienced urogynecologist.
The 3D model was segmented into anatomic structures, including the pelvic bones, bladder, urethra, vagina, uterus, rectum, obturator internus, cardinal ligaments, uterosacral ligaments, and five branches of the levator ani muscles. Each structure was then exported as an STL file and imported into Geomagic Studio software (version 12.0; Geomagic, Inc., Morrisville, NC, United States) for more detailed pre-processing, such as smoothing and positioning. Finally, the entire model was imported into ANSYS software version 14.0 (Houston, TX, United States) for the study. The contacts between the supporting tissues and organs were established by sharing contact faces through Boolean operations. The finite element model was meshed with 10-node tetrahedral elements and contained 503,368 elements and 696,101 nodes. The sectional view of the full model and the front view of the organs and supporting tissues are shown in Figures 1C,D, respectively. The detailed description of the FEM was presented and validated in our previous publication (7).

Material properties

All material properties were considered linear elastic to simplify the numerical simulations. The elastic modulus of the supporting tissue was based on perineal body data derived from in vivo measurements of healthy nulliparous women (8). Uniaxial tension data from cadaveric specimens (9, 10) were used for the elastic modulus of the apical ligaments. Since there are no existing data describing the fascial properties in the literature, we assumed that the elastic modulus of the fascia was half that of the perineal body, and that the elastic modulus of the abdominal cavity was half that of the fascia. Poisson's ratio for the fascia and abdominal cavity was 0.3 and 0.49, respectively. The mechanical properties of the vagina in previous studies were mostly measured in vitro, and the data varied from one study to another. Considering that the properties of connective tissue in the abdominal cavity in vivo are quite different from those in vitro, and that the vagina is similar to fascia based on clinical experience, we chose 0.015 MPa for this study instead of using the data in the literature. The data for the bladder and rectum were scaled up to the same magnitude as that of the vagina according to measurements in previous studies. Regarding the uterus, few studies have reported uterine material properties in nulliparous women; therefore, we used the characteristics of uterine samples from pregnant women for this analysis. At high intra-abdominal pressure, contraction of the muscles in the pelvic wall (including the peritoneum) leads to a higher elastic modulus of the tissues (11); thus, the pelvic wall was given the same properties as the attached muscles. All of the material parameters mentioned above are presented in Table 1.

Boundary conditions

It was assumed that the pelvis was fixed, and all nodes in the pelvis were fully constrained. The ligaments and muscles could not move relative to the pelvis, as shown in Figure 1B. Regarding the load conditions, the intra-abdominal pressure was applied to the surface of the peritoneal structure (Figure 1A). We used the Newton-Raphson method to perform the analysis until convergence was obtained. It took approximately 30 min to complete the simulation on a computer with an Intel Core i7-4790 processor (Intel Corp., Santa Clara, CA, United States) with a 3.60 GHz CPU and 32.0 GB RAM, running the Windows 7 Professional version (Microsoft Corp., Redmond, WA, United States).
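Because the analysis above relies on an implicit Newton-Raphson solve, the following minimal sketch illustrates that iteration on a deliberately simple one-dimensional nonlinear "spring", not on the actual ANSYS pelvic model; the stiffness law and numbers are invented purely for illustration.

```python
# Minimal Newton-Raphson sketch for an implicit nonlinear solve, illustrating the
# iteration used (in far more elaborate form) by the FE solver. The 1-D spring,
# its stiffness law and the load value are invented for illustration only.
def residual(u, load):
    # Internal force of a hypothetical stiffening spring minus the applied load.
    k0, k1 = 10.0, 4.0                 # fictitious stiffness coefficients
    return (k0 * u + k1 * u**3) - load

def tangent(u):
    # Derivative of the internal force (the "tangent stiffness").
    k0, k1 = 10.0, 4.0
    return k0 + 3.0 * k1 * u**2

def newton_raphson(load, u=0.0, tol=1e-10, max_iter=50):
    for i in range(max_iter):
        r = residual(u, load)
        if abs(r) < tol:               # convergence check on the residual
            return u, i
        u -= r / tangent(u)            # Newton update
    raise RuntimeError("did not converge")

u, iters = newton_raphson(load=25.0)
print(f"displacement ~ {u:.4f} (converged in {iters} iterations)")
```

In the full model the scalar residual becomes a vector of nodal force imbalances and the tangent a stiffness matrix, but the convergence logic is the same.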
Compliance

Backward and downward displacement of the vagina was observed with increasing intra-abdominal pressure under normal pelvic support. Displacement of the top of the vagina was larger compared with that of the bottom of the vagina. The results were in good agreement with the vaginal displacement observed in dynamic MRI, as shown in Table 2. For the anterior and posterior vaginal walls, 13 nodes were selected from the top to the bottom. The most distal edge of the cervix was set as the C point, and the compliance of the anterior vaginal wall, the posterior vaginal wall, and the C point was explored. The vertical and the forward-backward directions were defined as the Z direction and the Y direction of a coordinate system, respectively. The largest vertical displacement occurred at the top of the bladder. As the C point of the cervix was supported by the cardinal and uterosacral ligament complex, its downward compliance was lower than that of the vaginal wall (Figure 2A). The Y direction compliance of the vaginal wall decreased gradually from the top to the bottom, and it decreased faster than that in the Z direction (Figure 2). The Y direction displacement compliance of the anterior vaginal wall was higher than that of the posterior vaginal wall, similar to the Z direction.

Distributions of stress and strain in supporting tissues of the pelvic floor

On the fascia, ligaments, and muscle, the compressive strength was higher than the tensile and shear strength, and the area with the highest strain was more inclined to be injured. In our study, a set of elements with high strain in the pelvic support structure was selected to calculate the maximum principal and shear strains. The maximum positive principal strain showed that the levator ani muscle bore tension, and the maximum negative principal strain indicated that the levator ani muscle underwent compression. As shown in Figure 3A, the levator ani muscle bore tension in the middle of the front and at both sides of the back. Higher tensile and shear strains were detected from the junction of the levator ani muscle and obturator internus to the junction of the anterior levator ani muscle and pubic bone. Meanwhile, strain at the junction of the levator ani muscle and coccygeal muscle was relatively high, reaching more than 0.15. An area of concentrated strain was detected at the upper connection between the right cardinal ligament and the cervix, with an amplitude of more than 0.2 (Figure 3B). The upper third of the vaginal lateral wall bore high tensile strain, while the lower parts bore high shear strain. The upper anterior vaginal wall and the top of the vagina also bore high shear strain, reaching 0.5 (Figure 3C). The tensile and shear strains at the junction of the pubocervical fascia and the obturator muscles were higher compared with other tissues, and the maximum shear strain in the Y-Z direction was more than 0.7 (Figure 3D). Similar results were found at the rectovaginal fascia and the side of the mesorectum, as shown in Figures 3E,F, respectively. The supporting tissue elements with high strain were selected to calculate the maximum principal strain and shear strain. The edges of the supporting tissues were not considered, owing to the strain concentrations that may result from the calculation process. Figures 4A-C summarize the strain values for each tissue with intra-abdominal pressures ranging from 0.002 to 0.01 MPa. Strain values increased with increasing intra-abdominal pressure.
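As a unit check on the compliance values discussed above (displacement per unit pressure, reported clinically in mm per cm H2O while the simulation pressures are given in MPa), the short sketch below shows the conversion; the displacement and pressure numbers are placeholders, not results from this model.

```python
# Unit-check sketch for compliance = displacement / pressure.
# Clinical values are quoted in mm per cm H2O, while the simulation uses MPa;
# 1 cm H2O is approximately 98.07 Pa. The example numbers are placeholders.
CMH2O_TO_PA = 98.0665

def compliance_mm_per_cmH2O(displacement_mm: float, pressure_mpa: float) -> float:
    pressure_cmH2O = pressure_mpa * 1e6 / CMH2O_TO_PA
    return displacement_mm / pressure_cmH2O

# e.g. a 5 mm displacement under 0.01 MPa (about 102 cm H2O):
print(f"{compliance_mm_per_cmH2O(5.0, 0.01):.3f} mm/cmH2O")   # ~0.049
```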
The maximum principal strain was detected at the pubocervical fascia (Figure 4A), and the Y-Z shear strain reached 0.9 at the top of the pubocervical fascia (Figure 4C). A concentrated stress area was more likely at the levator ani muscle-obturator internus junction and cardinal ligaments-pelvic junction, which implied a high risk of clinical levator ani muscle and pelvic prolapse. The tension and shear strains for the lateral fascia were higher than those of the middle part connected with the organs. Thus, the probability of fascial and arcus tendineus fasciae pelvis prolapse was higher than that of a tear in the middle part connected with organs. This finding was also in agreement with the clinical result that there was a higher probability of displacement anterior wall prolapse than dilatation anterior wall prolapse. Figures 4D,E show the maximum principle and shear stresses of the pelvic support structures, respectively. A similar trend was observed in that the maximum principle stress and shear stress increased with increasing intra-abdominal pressure. The maximum principle stress was 0.041 MPa at the top of the cardinal ligaments (Figure 4D), and the maximum shear stress was 0.037 MPa at the levator ani muscle-coccygeus junction ( Figure 4E). These results indicated that the levator ani muscle and cardinal ligaments bore high tensile stress, as the maximum stress amplitude was very close between the levator ani muscleobturator internus junction and the top of cardinal ligaments (Figure 4D). This was in line with clinical expectations and previous research results showed that the levator ani muscle and cardinal ligaments were the main support structures in loadbearing. The shear stress in the levator ani muscle was higher than that of the cardinal ligaments ( Figure 4E). Discussion In this study, the compliance of the whole pelvic floor support system in a healthy female was studied using finite element analysis based on MRI. The vaginal wall displacement and the distributions of stress and strain in the supporting tissues were calculated under high intra-abdominal pressure. A similar displacement of anterior vaginal wall was observed between literature (11) and our study (5.58 mm for 100% pelvic floor muscle contraction under a downward pressure of 90 cm H 2 O vs. 5.29 mm under a uniform 100 cm H 2 O). Meanwhile, displacements of the vagina obtained in this study were in agreement with the data measured using clinical dynamic MRI. Figure 5 shows the displacement of the anterior vaginal wall compared with Larson et al.'s results reconstructed from dynamic MRI at rest and during the Valsalva maneuver (12). Our results and previous studies compare favorably. The results were also similar to those using dynamic MRI in asymptomatic volunteers ( Figure 5B). All of these evaluation results confirmed the effectiveness of our study. The posterior vaginal wall was more stable than the anterior vaginal wall in non-prolapsed women under high intra-abdominal pressure. In the vertical direction, the C point at the top was more stable than the anterior and posterior vaginal walls, while the stability of the C point was poorer in the anterior-posterior direction. According to our results, the anterior vaginal wall had the worst stability, which explained the high incidence of cystocele (13), clinically. Previous studies showed that compliance varied in different vaginal regions (14). Compliance was the highest at the top of the vagina and lowest at the vaginal introitus. 
Our results were consistent with this observation and implied that non-prolapsed women also had compliance during high intra-abdominal pressure, and that vaginal wall movement should be restored intra-operatively, rather than be completely constrained (14). In a previous study (14), compliance in the top and bottom of the anterior vaginal wall was approximately 0.51 mm/cm H 2 O and 0.18 mm/cm H 2 O, respectively. The corresponding values were lower in our study, which was 0.048 and 0.09 mm/cm H 2 O, respectively. However, our results matched well with the dynamic MRI results of the volunteer during the Valsalva as well as physical examination in Peking University People's Hospital. This discrepancy could be explained by two reasons: first, healthy women may vary in vaginal compliance; second, Spahlinger et al.'s research (14) was based on Caucasian anatomy, and our study was based on Asian anatomy. Different perineal body shapes and vaginal lengths could also lead to different compliance results. Regarding strain in the vagina and supporting tissues, we found that strain at the sides of the levator ani muscle and pelvic fascia was higher, indicating that there were high risks of clinical prolapse between the levator ani muscle and arcus tendineus musculi levatoris ani and between the pelvic fascia and arcus tendineus fasciae pelvis. These areas were also susceptible to tear injuries, resulting in a higher risk of injury to the paravaginal supporting tissues. Previous studies reported that the incidence of paravaginal defects in patients with anterior vaginal wall prolapse was 38-80% (15,16), and indicated that debonding of the pubocervical fascia and arcus tendineus fasciae pelvis clinically was the gold standard for diagnosing paravaginal defects (15). This study showed that both the levator ani muscle and the sides of the pelvic fascia were at a high risk of injury, and suggested that attention should be paid to lateral vaginal repair intraoperatively, which was also confirmed in Viana et al.'s study (17). The authors studied 66 women with symptomatic cystocele (grade 2-4) who underwent transvaginal paravaginal repair. Results showed that suspending the vesicovaginal fascia to the arcus tendineus fasciae pelvis was a safe and effective method for the treatment of paravaginal defects in patients with symptomatic cystocele, and the recurrence rate within 1 year was low (8.5%). The long-term effect of traditional anterior vaginal repair is poor, with a high recurrence rate of symptomatic cystocele. In this study, we found that there was a high risk of injury at the top of the cardinal ligaments and cervix. High strains were also detected at the vaginal sidewall and the upper anterior vaginal wall, which may be related to the broadening of the vaginal wall. The results indicated that special attention should be paid to these regions in clinical evaluations, and comprehensive repair plans should be made to avoid complications and reduce the recurrence rate as much as possible. The levator ani muscle and the cardinal sacral ligament complex bore high tensile and shear forces, which theoretically proved that the pelvic floor played an important role in the supporting tissues. This result was consistent with the finding by Chen et al. (4). The levator ani muscle bore high Y-Z shear stress to prevent the pelvic viscera from excessive downward movement. According to previous studies, the levator ani muscle has fiber orientation (18). 
It holds a strong load-bearing capacity along the horizontal fiber direction, while its vertical fiber direction cannot bear excessive load. The distribution of fibers in the levator ani muscle runs from one side of the obturator internus to the other (19), which was the X direction in our model.

Figure 5. Displacement of the anterior vaginal wall during the Valsalva maneuver: (A) results from our simulation; (B) results from a previous study (12) and dynamic magnetic resonance imaging (MRI).

Therefore, the levator ani muscle could not bear excessive Y-Z shear load, and the side of the levator ani muscle and the back area connected with the coccygeal muscle were more susceptible to injury. Our results showed that the pelvic floor support system in a healthy female was sensitive to increasing intra-abdominal pressure. Chronic high intra-abdominal pressure, such as with obesity, chronic cough, and chronic constipation, is a risk factor for POP (20,21). Avoiding high intra-abdominal pressure exercises as well as training to increase pelvic floor muscle strength could reduce the risk of pelvic floor support tissue injury and prevent POP (22). Regarding the boundary conditions in the finite element analysis, previous studies applied the load perpendicular to the vaginal sidewall to simulate the pelvic system during high intra-abdominal pressure. Chen et al. (4) applied a perpendicular pressure to the surface of the anterior vaginal wall, and Luo et al. (6) applied the pressure perpendicular to the nodes on the anterior and posterior vaginal walls, perineal body, and levator ani muscle. However, accurate loads on the vaginal wall surface are difficult to estimate (23), and the anterior vaginal wall in vivo is connected with the pubocervical fascia, not directly exposed to the abdominal cavity. Thus, the simulated perpendicular pressure on the vaginal wall surface differs from the actual conditions in vivo. Chen et al. (23) introduced a new idea in displacement loading by applying a specific displacement on the top of the uterus to simulate intra-abdominal pressure. However, applying a displacement constraint only on the top of the uterus leads to inaccurate stress distributions in the other organs that are not loaded, because intra-abdominal pressure is transmitted to the surface of organs through the peritoneum in vivo. In the current study, we first established a peritoneal structure, and then applied uniform intra-abdominal pressure on its surface. The pressure was transmitted to the surface of the pelvic organs through the connective tissues, which more closely simulated natural pelvic loading. There are limitations that should be acknowledged. First, although the isotropic linear elastic material parameters were derived from previous studies, the material properties may vary with different methods and different measurement conditions. Furthermore, some tissues, such as the perineal body, ligamentous complex, and fascia, exhibit viscoelastic properties; however, we did not consider the effect of a load that changes over time. Second, this study focused only on the passive stretching of the levator ani muscle, as the intra-abdominal pressure was applied by the volunteer under the condition of levator ani muscle relaxation. Anisotropy, hyperelasticity, and active contractility of the levator ani muscle were not taken into account.
Conclusion In conclusion, this is the first study to investigate the distributions of strain and stress as well as the high-risk injury areas in the whole pelvic floor support system. Based on the biomechanical characteristics of healthy women, our results showed that the levator ani muscle, the sidewall of the pelvic fascia, proximal of the cardinal ligaments, vaginal sidewall, and the upper anterior vaginal wall were vulnerable to injury due to the development of stress concentrations. These findings can be used to evaluate the potential injury areas in non-prolapsed women under high intra-abdominal pressure. It is also suggested that comprehensive clinical repair plans should be made to reduce post-operative complications and recurrence rates. Data availability statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. Ethics statement The studies involving human participants were reviewed and approved by the Ethics Committee at Peking University People's Hospital (Reference Number: IRB00001052-18018). The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article. Author contributions QR and SR contributed to the conception of the study. JW, XL, and BX performed the clinical experiment. SR contributed significantly to finite element analysis and data analyses. XL performed the manuscript preparation and wrote the manuscript. YL, QR, SR, and XL helped perform the analysis with constructive discussions. All authors contributed to the article and approved the submitted version.
5,601.4
2021-11-15T00:00:00.000
[ "Medicine", "Engineering" ]
Fano Resonance of the Symmetry-Reduced Metal Bar Grating Structure

We demonstrate that Fano resonance, and even multipole Fano resonance, can be obtained in a symmetry-reduced structure composed of gold bars with different bar sizes or bar shapes on a layer of dielectric. A transparency window is opened within the frequency region of the absorptive dipole resonance of the metallic bars, as long as the narrow grating waveguide mode induced by reducing symmetry coincides in spectrum with the dipole resonance such that destructive interference occurs between these two resonant modes. The line shape of the transmission spectra of the nanostructure can be modulated effectively by changing the size or shape of the series of metal bars. These results can be useful in the design of novel optical devices.

Introduction

Fano resonance, as a coherent phenomenon, has emerged as a common characteristic of complex, coupled plasmonic systems [1]. This effect is important in line shape engineering, and the frequency tunability of plasmonic nanosystems has been well established. Fano resonance can be obtained in materials with negative permeability (μ < 0) and positive dielectric permittivity (ε > 0), and even in a material with negative refractive index, where both ε and μ are negative [2]. Fano resonance can also be obtained in anisotropic materials, where the intensity of the surface plasmon resonance can be greatly enhanced. Much of the original work on plasmonic Fano resonance was carried out on metallic arrays. The broad resonance providing the continuum for the narrow Fano resonance is a strongly radiative collective dipolar mode formed from coupling of the plasmons on the individual array elements.

Metallic structures which lead to Fano resonance are divided into three kinds according to structure. The first are plasmonic nanostructures such as dolmen-type slab arrangements [3], the nonconcentric ring/disk cavity [4-6], symmetry-breaking ring/disk structures [7], and finite clusters of plasmonic nanoparticles [5,8]. The second are metallic photonic crystals; for example, an array of gold nanowires placed on a single-mode slab waveguide exhibits a Fano resonance in extinction for transverse electric polarization, owing to coupling between the array and the waveguide. The last are metamaterials. Fano resonance in metamaterials was observed for the first time in asymmetrically split-ring arrays [9]. Then, polarization-sensitive Fano resonance linked to strong optical activity and circular dichroism in the microwave and optical parts of the spectrum was engaged through a chiral arrangement of the metamaterial array with respect to the incident electromagnetic wave [10]. A Fano metamaterial with polarization-insensitive resonance, whose behavior is independent of the incidence direction of light, has also been introduced [11]. Fano resonances have recently been observed in a superconducting metamaterial, promising extremely high-Q modes [12]. Fano resonance is associated with the coherent interference of "bright" and "dark" hybridized plasmon modes [1,13-15]. Bright plasmon modes possess finite dipole moments, and their resonance is spectrally broadened due to radiative damping. In contrast, dark plasmon modes possess zero or nearly zero dipole moments, do not couple efficiently to light, and are therefore not broadened. The coupling between bright and dark plasmon modes occurs through the electromagnetic near field and can be controlled using symmetry breaking [16-19]. As Fano resonances arise from
the interference between two or more oscillators, they possess an inherent sensitivity to changes in geometry or local environment: small perturbations can induce dramatic resonance or line shape shifts. This property renders Fano resonant media particularly attractive for a range of applications, such as the development of chemical or biological sensors.

In this paper, we investigate a planar Fano resonance structure composed of a gold-bar grating placed on a dielectric layer. A distinct feature of the periodic gold grating proposed in this work is its symmetry-reduced arrangement. It is demonstrated that Fano resonance, and even multipole Fano resonance, appears in the transmission spectrum, and a transparency window is opened within the frequency region of the absorptive dipole resonance of the metallic bars, as long as the narrow grating waveguide mode induced by reducing symmetry coincides in spectrum with the dipole resonance such that destructive interference occurs between these two resonant modes. The line shape of the transmission spectra of the nanostructure can be modulated effectively by changing the size or shape of the series of metal bars. For example, a small variation of any bar's size or location within one lattice can open a transparency window in the transmission spectrum, and the width and wavelength of the transparency window can be modulated appreciably by each bar's size and location in the lattice. Additionally, a further modulating factor is introduced by using a hollow bar in the lattice: varying the inner size of the metallic bar allows the line shape of the transmission spectra to be tuned more finely. These results may be helpful for the design of new optical devices.

Material and Methods

The analyzed structure is presented in Figure 1. In our 2D FDTD calculations [20,21], perfectly matched layer boundary conditions [22] are used at the top and bottom, and periodic boundary conditions are used on the left and right sides of the lattice due to the periodicity of the system. We simulate the structure with a computational window of 700 nm × 2000 nm, and the structure is uniform and infinite in the out-of-plane direction. The structure is periodic with a period of 700 nm, and there are three gold bars contained in each lattice. A single Gaussian pulse of light with a wide frequency profile illuminates the metal bar grating at normal incidence (90°) from the bottom of the lattice in the cross section. The parameters of the Au grating are the widths of the three bars in the lattice, of which the first and the third are fixed at 100 nm. The middle bar can also be set as a hollow tube, whose inner width is a further parameter. The three bars are separated by two gaps, and the middle bar can be offset from the center of the lattice along the grating direction. The thickness of the dielectric under the Au bar array is fixed at h = 200 nm, and its relative permittivity is set to 3.
Results and Discussion For the symmetric grating array with identical bar sizes and bar gaps, only the dipole plasmon resonance can be excited by the incident wave, corresponding to a single transmission dip (as shown in Figure 2, w2 = 100 nm); a single transmission dip appears regardless of the bar size and gap size. In contrast, it is interesting to find that a transmission dip appears at a lower frequency beside the dipole resonant transmission dip for the symmetry-reduced bar grating structure, in which the center bar of the lattice is smaller or larger than the other two. The more the size of the center bar deviates from the size of the other bars, the more pronounced the new transmission dip becomes. As the size offset of the center bar increases, the new transmission dip gets deeper, meaning that its intensity grows stronger; meanwhile, its full width at half maximum increases. The new transmission dip, with its asymmetric shape, can be attributed to the Fano resonance associated with the coupling of the transversal surface plasmon resonance mode and the localized surface plasmon resonance mode. In addition, with increasing size of the center bar of the lattice, the center frequency of the new transmission dip hardly moves, but the original transmission dip red-shifts markedly, accompanied by an increase of its full width at half maximum. Moreover, the transmission peak between the two transmission dips becomes narrow and forms a transparency window, an EIT-like phenomenon that arises when destructive interference occurs between a broad resonance (the dipole mode) and a narrow one (usually a subradiant mode). Here the subradiant resonance comes from the grating waveguide structure. When light illuminates the periodic metal nanobar grating, a surface plasmon dipole mode is excited, but no grating waveguide mode is excited for the symmetric grating configuration with uniform bar size and bar gap; however, the numerical results indicate that a symmetry-reduced periodic metal bar grating renders this mode excitable, and this interesting asymmetry-induced resonance is in accordance with recent literature [23][24][25]. Additionally, dark-mode excitation by breaking or reducing the symmetry is a feasible alternative to excitation by plasmon coupling with a radiant resonant mode [26]. Besides, another shallow transmission dip appears in the higher frequency range, and its intensity grows as the size of the center bar increases. This transmission dip may also be attributed to Fano resonance. As a result, multiple Fano resonances can be obtained and modulated effectively by changing the size of one of the three bars in the lattice. We also simulate the transmission characteristics of the metal grating structure as the size of the side bars in the lattice varies (as shown in Figure 3). The results are identical to those in Figure 2.
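The EIT-like transparency window described above can be illustrated with the textbook two-coupled-oscillator model: a broad "bright" mode (the radiative dipole) coupled to a narrow "dark" mode (the subradiant waveguide resonance). The sketch below is generic; all parameter values are illustrative and are not fitted to the structure studied in this paper.

import numpy as np
import matplotlib.pyplot as plt

w = np.linspace(0.7, 1.3, 2000)   # probe frequency (arbitrary units)
w1, g1 = 1.00, 0.10               # "bright" dipole mode: broad, radiatively damped
w2, g2 = 1.00, 0.005              # "dark" waveguide mode: narrow, subradiant
kappa = 0.08                      # near-field coupling strength

D1 = w1**2 - w**2 - 1j * g1 * w   # bright-mode denominator
D2 = w2**2 - w**2 - 1j * g2 * w   # dark-mode denominator
x1 = D2 / (D1 * D2 - kappa**2)    # bright-mode amplitude per unit drive

absorption = g1 * w**2 * np.abs(x1)**2   # power dissipated in the bright mode
plt.plot(w, absorption / absorption.max())
plt.xlabel("frequency (arbitrary units)")
plt.ylabel("normalized absorption")
plt.show()

With g2 much smaller than g1, the absorption of the bright mode collapses near w2, reproducing the narrow transparency window between the two dips; increasing kappa widens the window, mirroring the behavior seen when the symmetry reduction is made stronger.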
Figure 4 shows the dependence of the transmission spectra of a lattice containing three bars of a periodic bar array on the location of the center bar. When the center bar is set at the center between the left and right bars, there is one wide transmission dip in the spectrum, associated with the surface plasmon dipole mode. When the center bar moves along the x direction, a new transmission dip appears at a lower frequency, which can be attributed to Fano resonance. As the center bar of the lattice moves further, both the Fano resonance mode and the original transmission dip get wider, but the central frequencies of the two resonance modes do not shift appreciably. Additionally, if the center bar moves still further, higher multipolar surface modes can also interfere with the broad dipole mode and generate higher-order Fano resonances, as they do when the size of the middle metal bar is increased. In Figure 4, the appearance of the octupolar Fano resonance resulting from these higher-order interactions is shown. In Figure 5, the calculated transmission spectra of the lattice containing three metal bars, with the center bar a hollow tube, are shown. For an inner size of the center bar of less than 50 nm, there is only one transmission dip (the dipole mode). The dipole-mode transmission dip red-shifts slowly while w22 is small; for example, the central frequency moves by only 0.0012 × 10^14 Hz as the inner size of the tube rises from 50 nm to 80 nm. When the inner size is large, the transmission dip moves much faster, shifting by 0.1044 × 10^14 Hz as the inner size of the tube increases from 80 nm to 90 nm. As the inner size of the center bar increases, a new shallow transmission dip appears (transmission 0.8663 at w22 = 75 nm); as the inner size increases further, this dip deepens dramatically (transmission 0.0514 at w22 = 90 nm). Moreover, as the inner tube size increases, both the full width at half maximum and the central frequency change quickly. The central frequencies of both transmission dips red-shift markedly as w22 increases; in particular, the larger w22 is, the faster the central frequency moves. For example, the central frequency moves from 3.3932 × 10^14 Hz to 3.3056 × 10^14 Hz, a shift of 0.0876 × 10^14 Hz, as the inner tube size rises from 85 nm to 90 nm. Additionally, the transparency window between the two transmission dips is associated with the EIT-like phenomenon. The central frequency of the transmission peak hardly moves from about 3.4 × 10^14 Hz as w22 increases, and its transmission value stays close to 1.
Additionally, as the inner tube size increases, a new shallow transmission dip emerges in the higher frequency region, and it may also be attributed to Fano resonance. Thus it is confirmed that Fano resonance, and even multiple Fano resonances, can be obtained in a periodic metal structure with a series of hollow bars, and can be modulated effectively by the inner bar size. Figure 5 shows that the dipole-mode transmission dip widens with the inner size of the tube in the lattice, but narrows again when w22 increases from 85 nm to 90 nm. This can be understood as follows: the dipole-mode transmission dip red-shifts as the inner size of the tube increases, but the central frequency of the transmission peak between the new transmission dip and the original one stays at the same location throughout, so the red shift of the left side of the dipole-mode transmission dip is restricted and the line width is narrowed. It is clear that the bright dipole mode can be transferred into the subradiant waveguiding mode as long as the grating symmetry is reduced, resulting in a narrow transparency band, as shown in the spectra between the dipole-mode transmission dip and the Fano resonance (the new transmission dip to the left of the dipole mode) in Figures 2 and 5. Physically, destructive interference dominates, so that the electromagnetic energy of the dipole resonance is transferred entirely to the grating waveguide resonance, which accounts for the transparency window. To explicitly verify the dipole resonance (transmission dip) and the symmetry-reduction-induced Fano resonance (the new transmission dip), the resonant E-field distributions are plotted. It is interesting to find that a strong E-field distribution is localized between the bars for the symmetric gold bar array with uniform bar size, bar gap, and bar shape, owing to the standing wave of the dipole mode (as shown in Figure 6). Thus the dipole resonance is excited in the symmetric configuration with equal bar sizes and equal bar gaps (as shown in Figure 6). In the symmetry-reduced configuration, with the size or inner size of the middle bar in the lattice increased, the dipole mode distributions change and the intensity of the electric field localized around the metal bars is dramatically weakened (as shown in Figures 7 and 8), owing to the standing wave induced in the structure; this corresponds to the red shift of the transmission dip in Figures 2 and 5 for the symmetry-reduced configuration.
When the plane wave is normally incident on the symmetric metal bar grating, the electric field distributions in the bar grating have the same vector direction (they are in phase) and hence do not excite the transversal grating waveguide mode. For the symmetry-reduced case, however, the standing wave of the grating waveguide mode becomes excitable, since the asymmetric bar sizes break the synchronized phase of the plane wave impinging on the metal surface; as a result, nodes of the transversal standing wave are formed. The waveguide mode interferes destructively with the channel of the absorptive dipole resonance, and consequently a transparency window is obtained, which stems from the low-loss nature of the grating-modulated waveguiding mode within the dielectric layer. In other words, the Fano resonance mode emerging to the left of the dipole-mode transmission dip may be attributed to the coupling of the dipole mode and the waveguide mode, as seen clearly in the field distributions in Figures 9 and 10. Conclusion In conclusion, it is found that, by reducing the grating symmetry through altering the size or shape of a series of metal bars in the nanostructure, Fano resonance or even multipole Fano resonance can be obtained in the transmission spectra. A transparency window associated with the EIT-like phenomenon is obtained between the dipole mode and the Fano resonance. The line shape of the transmission spectra of the nanostructure can be modulated effectively by changing the size or shape of the series of metal bars. The results may be helpful for the design of new optical devices. Figure 1: Cross section of one lattice, containing three gold bars, of the gold periodic bar array on a layer of dielectric used in the FDTD simulations. The parameters are defined in the text. Figure 2: Transmission spectra of a lattice containing three bars of a periodic bar array with the size of the center bar w2 = 75 nm, 100 nm, 125 nm, and 150 nm; the size of the other two bars is w1 = w3 = 100 nm, the lattice period is 700 nm, and the other parameters are w22 = 0 and d = 0. Figure 3: Transmission spectra of a lattice containing three bars of a periodic bar array with the size of the left bar w1 = 75 nm, 100 nm, 125 nm, and 150 nm; the size of the other two bars is w2 = w3 = 100 nm, the lattice period is 700 nm, and the other parameters are w22 = 0 and d = 0. Figure 4: Transmission spectra of a lattice containing three bars of a periodic bar array with the offset of the center bar from the lattice center in the x direction d = 0, 50 nm, 100 nm, and 200 nm; the bar sizes are w1 = w2 = w3 = 100 nm, w22 = 0, and the lattice period is 700 nm. Figure 5: Transmission spectra of a lattice containing three bars of a periodic bar array in which the middle bar is a hollow tube with inner size w22 = 50 nm, 75 nm, 80 nm, 85 nm, and 90 nm; the bar sizes are w1 = w2 = w3 = 100 nm, the lattice period is 700 nm, and the offset is d = 0. Figure 6: Cross sections of the spatial distributions of the E and H field intensities of the dipole mode resonance at the center peak frequency f = 3.5240 × 10^14 Hz in the transmission spectrum of the symmetric periodic metal bar array with bar sizes w1 = w2 = w3 = 100 nm, equal bar gaps g1 = g2, lattice period 700 nm, inner size of the middle bar w22 = 0, and offset d = 0.
Figures 7 and 8: Cross sections of the spatial distributions of the E and H field intensities of the dipole mode resonance at the center peak frequency f = 3.4688 × 10^14 Hz in the transmission spectrum of the symmetry-reduced periodic metal bar array with bar sizes w1 = w3 = 100 nm and w2 = 150 nm, bar gaps g1 = g2, lattice period 700 nm, inner size of the middle bar w22 = 0, and offset d = 0. Figures 9 and 10: Cross sections of the spatial distributions of the E and H field intensities of the Fano resonance at the center peak frequency f = 3.4064 × 10^14 Hz in the transmission spectrum of the symmetry-reduced periodic metal bar array with bar sizes w1 = w3 = 100 nm and w2 = 150 nm, bar gaps g1 = g2, lattice period 700 nm, inner bar size w22 = 0, and offset d = 0.
Universal Seesaw and $0\nu\beta\beta$ in a new 3331 left-right symmetric model We consider a class of left-right symmetric models with the enlarged gauge group $SU(3)_c \times SU(3)_L \times SU(3)_R \times U(1)_X$ without a scalar bitriplet. In the absence of a scalar bitriplet, there is no Dirac mass term for the fermions, including the usual quarks and leptons. We introduce new isosinglet vector-like fermions so that all the fermions get their masses through a universal seesaw mechanism. We extend our discussion to neutrino mass and its implications for neutrinoless double beta decay ($0\nu\beta\beta$). We show that for TeV-scale $SU(3)_R$ gauge bosons, the heavy-light neutrino mixing contributes dominantly to $0\nu\beta\beta$ and can be observed at ongoing experiments. Towards the end we also comment on the different possible symmetry breaking patterns of this enlarged gauge symmetry down to the standard model. I. INTRODUCTION The Standard Model (SM) of particle physics has been the most successful phenomenological theory, especially after the discovery of its last missing piece, the Higgs boson, at the Large Hadron Collider (LHC) back in 2012, with subsequent null results for Beyond Standard Model (BSM) searches. However, the SM fails to address several observed phenomena as well as theoretical questions. For example, it fails to explain the sub-eV neutrino mass [1][2][3][4][5][6], the origin of parity violation in weak interactions, and the origin of the three fermion families. The first two questions can be naturally addressed within the framework of the left-right symmetric model (LRSM) [7,8], one of the most widely studied BSM frameworks. These models not only explain tiny neutrino masses naturally through the seesaw mechanism but also give rise to an effective parity-violating SM at low energy through spontaneous breaking of a parity-preserving symmetry at a high scale. The conventional LRSM based on the gauge group SU(3)_c × SU(2)_L × SU(2)_R × U(1)_{B-L} can be enhanced to a more general LRSM based on the gauge group SU(3)_c × SU(3)_L × SU(3)_R × U(1)_X (or, in short, 3331). The advantage of such an enlargement of the gauge symmetry is that the latter can explain the origin of the three fermion families of the SM, in addition to having the other generic features of the LRSM. In such a model, the number of fermion generations is no longer a choice but a necessity in order to cancel chiral anomalies. In such models, where the usual lepton and quark representations are enlarged from a fundamental of SU(2)_L in the SM to a fundamental of SU(3)_L, the number of generations must equal the number of colors in order to cancel the anomalies [9]. This is in contrast with the SM or the usual LRSM, where the gauge anomalies are canceled within each fermion generation separately. One can also build a sequential 3331 model by including additional chiral fermions. But since such a model does not explain the origin of three families from the anomaly cancellation point of view and contains non-minimal chiral fermion content, we stick to discussing a special type of non-sequential 3331 model here. A few recent works [10][11][12][13][14] have been carried out within the framework of such 3331 models with different motivations.
Particularly from the neutrino-mass point of view, the work [10] considered a scalar sector comprising bitriplets plus sextets, which gives rise to tiny neutrino masses through the canonical type I [15] and type II [16,17] seesaw. Another recent work [12] studied a specific 3331 model with bitriplet and triplet scalar fields that can explain tiny neutrino masses through the inverse [18,19] and linear [19] seesaw mechanisms. The earlier work [13] considered effective higher-dimensional operators to explain fermion masses in 3331 models, while the recent work [14] studied the model and several of its variants from the LHC phenomenology point of view. Here, we consider another possible way of generating fermion masses in 3331 models: the universal seesaw mechanism [20][21][22], in which all fermions acquire their masses through a common seesaw mechanism. Incorporating additional vector-like fermion pairs corresponding to each fermion generation, we show that the correct fermion mass spectrum can be generated in such a model with a scalar sector in which all scalars transform as fundamentals under SU(3)_{L,R}, without the need for the bi-fundamental and sextet scalars used in [10,12] to implement the different seesaw mechanisms for neutrino masses. We also discuss the possibility of light neutral fermions apart from the sub-eV active neutrinos, their role in neutrinoless double beta decay (0νββ), and the different possible symmetry breaking chains of the gauge symmetry SU(3)_c × SU(3)_L × SU(3)_R × U(1)_X down to the SM. We show that for TeV-scale SU(3)_R gauge bosons, the right-handed neutrinos are constrained to lie around the keV mass regime, with interesting consequences for 0νββ. We find that although the pure heavy-neutrino contribution to 0νββ remains suppressed compared to the one from light neutrinos, the heavy-light neutrino mixing, which can be quite large in this model without any fine-tuning, gives a large contribution to 0νββ, keeping it within experimental reach. This letter is organized as follows. In section II we briefly discuss the model, with details of the particle spectrum, fermion masses via the universal seesaw, and gauge boson masses. In sections III and IV we discuss the contributions to 0νββ from purely light (heavy) neutrinos and from heavy-light neutrino mixing, respectively. Finally, we discuss the different possible symmetry breaking chains in section V and conclude in section VI. II. THE MODEL FRAMEWORK A. Particle Spectrum Assuming q = 0, the neutral components of the scalar fields acquire their vacuum expectation values (vevs) as given in equation (2). B. Fermion Mass The Yukawa Lagrangian can be written down accordingly. After integrating out the heavy fermions, we can write down the effective Yukawa terms for the charged fermions of the standard model. (FIG. 1: Feynman diagram for the Dirac masses of the fermions within the universal seesaw.) Similarly, the heavy neutral singlet fields N_{L,R} can be integrated out to generate the effective mass matrix of the neutrinos ν_L, ν_R, which contains a Dirac mass term and two Majorana mass terms. There are additional neutral leptons ξ_L, ξ_R which acquire Dirac and Majorana masses similar to those of ν_{L,R} shown above. The origin of the Dirac masses can be understood from the mass diagrams shown in figure 1, whereas the Majorana mass diagrams are given in figure 2. Apart from these, the light neutrinos also receive a non-leading contribution to their masses from the diagram shown in figure 3.
The contribution of this diagram can be written down accordingly. Here we assume equality of the left- and right-sector Yukawa couplings as well as of the masses, M_LL = M_RR = M_N. The approximate scale of the light neutrino mass matrix M_L has to be less than 0.1 eV, which puts limits on the model parameters Y_N and M_RR. For example, if Y_N ~ 0.01 then M_RR ≥ 10^10 GeV is required to keep M_L ≤ 0.1 eV. Lowering the scale of M_RR further would involve more fine-tuning of the Yukawa coupling Y_N. In the limit of tiny M_D, the light neutrino mass originates solely from M_L given in equation (5), and the mixing between heavy and light neutrinos can be neglected. In such a case, the light neutral lepton mass matrix can be written in the basis (ν_L, ν_R). As a result of this particular structure of the mass matrix, the mass eigenvalues of the left-handed and right-handed neutrinos are proportional to each other, and the two mixing matrices are related through U, the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) leptonic mixing matrix. For a representative set of input model parameters, e.g., v_L ≈ 174 GeV, v_R ≈ 10 TeV, and m_i ≈ 0.1 eV, the right-handed neutrino masses lie at the keV scale. Such light keV-scale right-handed neutrinos can also have very interesting implications for cosmology [31]. We leave a detailed study of this model from the cosmology point of view to an upcoming work. C. Gauge Boson Mass The relevant kinetic terms leading to the gauge boson masses involve the gauge fields W^µ_{L,R}. Using the respective vevs of the scalar fields shown in equation (2), the physical spectrum contains: one massless photon A; four neutral gauge bosons Z_{L,R} and Z'_{L,R}; four charged gauge bosons W^±_{L,R}; four gauge bosons with charge q + 1, X_{L,R}; and four gauge bosons with charge q, Y^±_{L,R}. III. 0νββ WITH PURELY LIGHT (HEAVY) NEUTRINO CONTRIBUTIONS We find that the light sub-eV-scale left-handed neutrinos (ν_L) and the heavy right-handed neutrinos (ν_R) with keV-scale masses can give sizable contributions to neutrinoless double beta decay. Since the bitriplet scalar is absent in the present left-right symmetric 3331 model, there is no tree-level Dirac mass term for the light neutrinos. However, Majorana masses for ν_L and ν_R ≡ N_R are eventually generated through the universal seesaw; see Fig. 2. The Majorana nature of the neutrinos violates lepton number by two units and thus contributes to 0νββ decay. The Feynman diagrams for 0νββ decay due to the exchange of left-handed as well as right-handed neutrinos are depicted in Fig. 7, and the corresponding Feynman amplitudes follow. The inverse half-life of a given isotope for 0νββ decay, due to the exchange of left-handed light neutrinos via left-handed currents and of right-handed neutrinos via right-handed currents, is expressed in terms of G_01, the 0νββ phase-space factor, M_i, the corresponding nuclear matrix elements (NME), and η_i, the corresponding dimensionless particle physics parameters. Since the right-handed neutrino masses M_i in the present model are far below the neutrino virtuality momentum, i.e., |M_i^2| ≪ p^2 with p around 100 MeV, the NMEs for right-handed and left-handed neutrinos are the same, i.e., M_ν = M_N. A. Standard mechanism via left-handed neutrinos ν_L The standard mechanism for neutrinoless double beta decay, due to the exchange of light left-handed neutrinos via left-handed currents, gives the dimensionless particle physics parameter η_{ν_L}. Here, m_e is the electron mass.
The effective mass parameter for the standard mechanism is given explicitly in the usual form. In the present left-right symmetric 3331 model, we find that the right-handed neutrino mass lies around a few keV (for a TeV-scale W_R), which is much less than its momentum, M_i ≪ |p|. Under this condition, the propagator simplifies in a similar way as for light neutrino exchange. This results in the dimensionless particle physics parameter η_{ν_R} due to the exchange of right-handed neutrinos via right-handed currents, where the proportionality between η_{ν_R} and η_{ν_L} appears in the last step owing to the proportionality between the heavy and light neutrino mass matrices discussed above. After a little simplification, the effective mass parameter due to the exchange of right-handed neutrinos can be expressed accordingly. It is clear from eq. (6) that the light and heavy neutrino mass eigenvalues are proportional to each other. Thus, one can express the W_R mass in terms of the model vevs. As a result, the effective Majorana mass parameter, with g_L ≈ g_R and M_W ≈ (1/2) g_L v_L, is modified accordingly. Comparing this with the light neutrino contribution, it is straightforward to estimate that the heavy neutrino contribution is suppressed by a factor of m_i/M_i = (v_L/v_R)^2 compared to the light neutrino contribution. C. Numerical Results The total contribution to the inverse half-life for neutrinoless double beta decay of a given isotope in the present left-right symmetric 3331 model, due to the exchange of left-handed as well as right-handed neutrinos, is the sum of the two terms. For the NH pattern: for the normal hierarchical (NH) pattern of light neutrinos we consider mass structures for the left-handed and right-handed neutrinos in which M_3 is fixed around the keV range, as a result of choosing the W_R mass scale at a few TeV; the analytic form of the effective mass parameter due to the exchange of right-handed neutrinos follows. For the IH pattern: similarly, for the inverted hierarchical (IH) pattern of light neutrinos, the masses of the light left-handed and right-handed neutrinos are fixed such that the heaviest right-handed neutrino mass M_2 lies around the keV range, with the corresponding effective mass parameter. We have generated the effective mass (left panel) and half-life (right panel) as functions of the lightest neutrino mass, m_ν1 for NH and m_ν3 for IH, as shown in Fig. 5. The green and red lines are for the NH and IH patterns of light neutrinos, respectively, in both panels. The horizontal dashed line shows the bound from 0νββ experiments, while the vertical dashed lines along with the brown shaded regions are excluded by the Planck limit. The present bound on the half-life is T^{0ν}_{1/2}(136Xe) > 1.07 × 10^26 yr at 90% C.L. from KamLAND-Zen [30]. It is found that the new physics contributions to 0νββ are strongly suppressed and the standard mechanism due to the exchange of light neutrinos is dominant. The total contribution to 0νββ is similar to the light neutrino contribution, saturating the experimental bound only near the quasi-degenerate regime, as is clearly visible in the plots shown in figure 5.
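As a concrete numerical companion to the standard-mechanism discussion above, the short Python sketch below scans the Majorana phases in |m_ee| = |sum_i U_ei^2 m_i| for the NH case. The oscillation parameters are illustrative best-fit values, an assumption rather than values quoted in this paper.

import numpy as np

s12sq, s13sq = 0.31, 0.022   # sin^2(theta_12), sin^2(theta_13) (illustrative)
dm21sq = 7.4e-5              # solar mass-squared splitting, eV^2
dm31sq = 2.5e-3              # atmospheric splitting, eV^2 (normal hierarchy)

def mee_band(m1, n=181):
    """Min/max of |m_ee| over the two Majorana phases, NH spectrum."""
    m2 = np.sqrt(m1**2 + dm21sq)
    m3 = np.sqrt(m1**2 + dm31sq)
    c13sq = 1.0 - s13sq
    phases = np.linspace(0.0, 2.0 * np.pi, n)
    alpha, beta = np.meshgrid(phases, phases)
    mee = np.abs((1.0 - s12sq) * c13sq * m1
                 + s12sq * c13sq * m2 * np.exp(1j * alpha)
                 + s13sq * m3 * np.exp(1j * beta))
    return mee.min(), mee.max()

for m1 in (1e-4, 1e-3, 1e-2, 1e-1):   # lightest neutrino mass in eV
    lo, hi = mee_band(m1)
    print(f"m1 = {m1:.0e} eV : |m_ee| in [{lo:.2e}, {hi:.2e}] eV")

Near the quasi-degenerate regime (m1 around 0.1 eV), the upper edge of the band approaches the experimental sensitivity, consistent with the behavior described above.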
IV. 0νββ WITH HEAVY-LIGHT NEUTRINO MIXING Though the purely heavy neutrino contribution to 0νββ remains suppressed, there can be sizable contributions from heavy-light neutrino mixing diagrams. This heavy-light neutrino mixing can also contribute to the light neutrino masses through a type I seesaw formula, which was ignored in the above discussion for simplicity. Such an assumption is valid for negligible heavy-light neutrino mixing. However, if we go beyond this simple assumption, or equivalently consider M_LR ≈ M_RR, then the Dirac as well as Majorana mass matrices for ν_L and ν_R ≡ N_R can be written down, and the neutral lepton mass matrix is expressed in the basis (ν_L, N_R). In the limit M_L ≪ M_D ≪ M_R, the type-I seesaw contribution to the light neutrino mass takes its standard form, and the light-heavy neutrino mixing is proportional to M_D/M_R. With v_2L ≈ 174 GeV and v_2R around a few TeV, we find that the light-heavy neutrino mixing is large, of order ≲ 0.1. This large light-heavy neutrino mixing, with the heavy neutrinos fixed at the few-keV scale, can contribute significantly to 0νββ. A. Purely left-handed current effects The new physics contribution to 0νββ decay arising from purely left-handed currents due to the exchange of keV-scale right-handed neutrinos results in an effective mass parameter in which M_i is in the keV range and V_νN is the light-heavy neutrino mixing. Since V_νN ∝ v_2L/v_2R ≈ 0.01, the effective mass parameter for 0νββ is estimated to be m^N_ee,LL = (0.01)^2 × 10^3 eV, which is of the order of 0.1 eV, saturating the KamLAND-Zen [30] bound (a numerical cross-check is sketched below). The contributions of the λ diagrams shown in Fig. 7 are given by (31). With M_WL ≈ 80.4 GeV, M_WR ≈ 4 TeV, and V_νN ≈ 0.01, the effective mass parameters due to these λ diagrams are found to be around the sub-eV level, which translates to a lifetime of about 10^26 yr. This value is very close to the experimental bound for the Xe isotope. It is interesting that such large, observable heavy-light neutrino mixing arises naturally in the model without any fine-tuning of the Yukawa couplings involved. Although we consider one contribution at a time in the above discussion, in general one has to include all the contributions while keeping the light neutrino mass and mixing in the allowed range. We intend to perform a detailed study, considering the most general neutrino mass formula M_ν = M_L + M^I_ν, in an upcoming work.
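Before turning to the symmetry breaking patterns, the order-of-magnitude estimates of Section IV A above can be cross-checked with the one-generation sketch below. As in the text, it assumes M_LR ≈ M_RR, v_2L ≈ 174 GeV, a few-TeV v_2R, and a keV-scale heavy neutrino; the specific numbers are illustrative.

# one-generation cross-check of the seesaw relations quoted above:
# m_nu ~ M_D^2 / M_R and heavy-light mixing V_nuN ~ M_D / M_R
v2L, v2R = 174.0, 17400.0       # GeV; chosen so that v2L / v2R = 0.01
M_R = 1.0e-6                    # heavy neutrino mass: 1 keV expressed in GeV
M_D = (v2L / v2R) * M_R         # Dirac mass under the assumption M_LR ~ M_RR

V_nuN = M_D / M_R               # heavy-light mixing ~ v2L / v2R
m_seesaw = M_D**2 / M_R         # type-I seesaw contribution to m_nu
m_ee_LL = V_nuN**2 * M_R        # LL-current effective mass from keV-scale N_R

print(f"V_nuN    ~ {V_nuN:.2f}")
print(f"m_seesaw ~ {m_seesaw * 1e9:.2e} eV")
print(f"m_ee,LL  ~ {m_ee_LL * 1e9:.2f} eV")   # ~0.1 eV, as quoted in the text

The printed m_ee,LL reproduces the (0.01)^2 × 10^3 eV ≈ 0.1 eV estimate made above.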
V. SYMMETRY BREAKING PATTERN Depending on the scales of the different vevs mentioned above, the gauge symmetry of the model can be broken down to that of the Standard Model through several possible symmetry breaking chains, summarized pictorially in figure 8. The relevant scalar potential of the model can be written down with the discrete left-right symmetry assumed, which ensures the equality of the left- and right-sector couplings. However, as shown in earlier works [21] in the context of the usual LRSM with universal seesaw, the scalar potential of such a model with exact discrete left-right symmetry is too restrictive and gives either a parity-preserving solution (v_L = v_R) or a solution with (v_R ≠ 0, v_L = 0) at tree level. While the first is not phenomenologically acceptable, the latter solution can be acceptable if a non-zero vev v_L ≠ 0 can be generated through radiative corrections [32]. While this may naturally explain the smallness of v_L compared to v_R, it constrains the parameter space significantly [32]. Another way of achieving a parity-breaking vacuum is to consider a softly broken discrete left-right symmetry, with different mass terms for the left- and right-sector scalars [8,21]. As pointed out by the authors of [8], such a model, which respects the discrete left-right symmetry everywhere except in the scalar mass terms, preserves the naturalness of the left-right symmetry in spite of radiative corrections. Another interesting way to achieve a parity-breaking vacuum is to decouple the scale of parity breaking from that of gauge symmetry breaking by introducing a parity-odd singlet scalar [33]. While we do not perform a detailed analysis of the different possible symmetry breaking chains and their constraints on the parameter space of the model, we outline them pictorially in the cartoon shown in figure 8. As can be seen from figure 8, there are seven different symmetry breaking chains through which the gauge symmetry of the model, SU(3)_c × SU(3)_L × SU(3)_R × U(1)_X, can be broken down to that of the SM, as summarized below. Three-step breaking is possible in three different ways, depending on the hierarchy of the vevs; one can have the usual LRSM (3221), an asymmetric LRSM (3321 or 3231), or the usual 331 model as an intermediate stage. All these different symmetry breaking chains can not only provide different phase transition histories in cosmology but also give rise to different particle spectra, including gauge bosons as well as neutral fermions, which could be tested in different experiments. VI. CONCLUSION We have presented a class of left-right symmetric models with the extended gauge group SU(3)_c × SU(3)_L × SU(3)_R × U(1)_X, with a universal seesaw mechanism for fermion masses and mixing, and discussed the implications for neutrinoless double beta (0νββ) decay. The novel feature of the model is that the masses and mixing of the left-handed and right-handed neutrinos are exactly determined by the oscillation parameters and the lightest neutrino mass. This forces the heavy neutrino masses to lie in the keV regime if the W_R mass is fixed at a few TeV. We show that in such a case the heavy neutrino contribution to 0νββ remains suppressed compared to the usual light neutrino contribution. We also show that for such a TeV-scale model, the heavy-light neutrino mixing can be quite large and can contribute substantially to the 0νββ diagrams, keeping it within experimental reach. In the end we have discussed the scalar potential and the possible symmetry breaking patterns allowed for the spontaneous breaking of the 3331 gauge symmetry down to that of the standard model.
Building Information Modeling and Artificial Intelligence Based Smart Construction Management: Materials and Electrical: With the development of society and technological progress, the requirements of government regulatory departments for engineering construction efficiency, quality, and safety are constantly increasing. The traditional extensive construction process can no longer meet the requirements of the modern construction industry's development. Given the shortcomings of traditional construction processes, the concept of intelligent construction has been introduced. The construction of new smart and digital twin (DT) cities is entering an explosive period. The application of rapid building modeling technology based on the integration of artificial intelligence (AI) and building information modeling (BIM) in smart cities has gradually led to new explorations and attempts, and its application value is becoming increasingly prominent. A brand-new auto-machine learning (auto-ML) integrated algorithm technology platform for 3D building modeling is being developed and improved over time by combining these technologies. Introduction The traditional construction process is affected by issues such as inadequate construction and monitoring efficiency, as well as insufficient management of construction quality and safety (Shah, 2022). Owing to the ongoing tightening of the efficiency, quality, and safety standards set by government regulatory authorities for engineering construction, old construction methods can no longer meet these requirements; thus, the term intelligent construction has been introduced. The development of smart cities is currently experiencing rapid expansion, and urban DTs (Shoukat, 2023), which are essential for the construction of smart cities (Niaz, 2022), face challenges such as diverse urban structures, large data volumes, and incomplete information in management procedures. Smart cities have emerged as a guiding principle for sustainable, efficient, and technologically sophisticated urban development amid ongoing urbanization. The incorporation of advanced technology is key to this evolution, and one particularly transformational innovation is the utilization of DTs in building construction (Omrany, 2023). A "DT" is a virtual model of a physical object or system (Shoukat, 2022). Its use in the development of smart cities is revolutionizing the methods we use to strategize, create, build, and oversee urban infrastructure. The construction process incorporates practical engineering cases and utilizes technologies such as BIM, the Internet of Things (IoT), AI, DT, cloud computing, and mobile communication (Raza, 2021). These technologies are used to create project information models for building projects on a 3D design platform. The construction expenses associated with DT cities are consistently escalating because of their low efficacy, lengthy development cycles, and challenging updating processes. IoT hardware is employed to regularly gather data from the different aspects of a project. Simultaneously, AI is utilized to intelligently analyze the collected data, enabling an efficient mode of collaborative interconnection and intelligent analysis of the various elements in the construction process. The implementation of urban DTs still relies heavily on manual processing, making it challenging to ensure quality and adhere to standards. Additionally, there are numerous obstacles to achieving effective collaborative management across different specialties.
This article focuses on the auto-ML algorithm technology platform, which combines industry AI and BIM to facilitate rapid modelling. It examines and analyses the various ways in which the platform might be applied and extended in the context of smart cities. To achieve collaborative interconnection and intelligent analysis of the various elements during the construction process, BIM, IoT, DT, and AI technologies must be utilized. The characteristics of these four technologies and their interdependence are summarized as follows: BIM Technology The establishment of a virtual building model, which offers a single, comprehensive, and logically related collection of building information, is the fundamental component of BIM (Tang, 2019). This information collection not only provides the geometric information, professional attributes, and status information that describe building components, but also contains status information of objects that are not composed of components. Knowledge ranging from building design and construction to operation and maintenance can be incorporated into the information library described above, ultimately supporting the delivery of the construction unit. To improve job efficiency and cut expenses, effective coordination between staff from the facility operating department, owners, and other parties is essential. BIM can be divided into three main categories, namely technology, procedure, and policy, as shown in Figure 1. The collaboration of these three fields results in a framework that allows for the digital management of building data throughout the design and construction phases of architecture. IoT Technology To achieve intelligent identification, positioning, tracking, monitoring, and management of items, the IoT is a network that makes use of information-sensing devices such as radio frequency identification, infrared sensors, global positioning systems, and video capture. These devices connect items to the internet in accordance with agreed-upon protocols for information exchange and communication. DT Technology The DT concept functions on the premise of connection, utilizing the IoT, sensors, and sophisticated data analytics. This interconnection allows a constant exchange of information between the physical and digital worlds, giving stakeholders unparalleled real-time knowledge about the current state, performance, and effectiveness of buildings (Shoukat, 2022). Consequently, decision-makers can make well-informed decisions, allocate resources efficiently, and proactively tackle obstacles, ultimately promoting the growth of intelligent, more adaptable urban areas. A DT model of a smart city is shown in Figure 2. AI Technology With its learning, adaptation, and autonomous decision-making, AI will transform the construction business. AI technologies are used in design, planning, construction, operation, and maintenance to improve processes, resource management, and the intelligence of urban structures. AI's ability to analyze large datasets and draw conclusions is crucial to building construction in smart cities. Machine learning algorithms, a subset of AI, help systems recognize patterns, forecast events, and learn from real-world data. This capability helps optimize construction schedules, reduce costs, and sustain urban projects (Nawaz, 2022).
The analysis of the characteristics of the BIM, IoT, DT, and AI technologies discussed above clearly indicates that IoT technology provides the low-level capabilities of information perception, collection, transmission, and monitoring. In the context of business requirements, AI technology excels at the intelligent analysis and processing of information data. Information integration, user interaction, information display, and management are the upper-level capabilities that BIM technology provides. By integrating these technologies, a "closed-loop information flow" can be produced throughout the construction process, which has tremendous application value in engineering construction. Architecture of Smart Construction The overall architecture of the intelligent construction platform is divided into four layers: the perception layer, network layer, platform layer, and application layer, as shown in Figure 3. The specific functions of each layer are as follows. (1) Perception layer: composed of basic sensing devices (such as various sensors, cameras, and GPS devices) and networks of sensors (such as sensor networks). Its function is to identify objects and collect attribute information such as the status or location of the monitored objects. (2) Network layer: responsible for transmitting the information collected by the perception layer to the cloud server deployed by the smart construction platform, and also for transmitting the commands issued by the smart construction platform to the various sensor devices in the perception layer. The network layer mainly transmits large amounts of information through the Internet of Things, the Internet, and mobile communication networks. (3) Platform layer: responsible for the data analysis and processing involved in business processing, as well as for managing the various sensing devices in the perception layer. (4) Application layer: utilizes the functions provided by the platform layer to complete specific user applications, such as real-time display of monitoring results and display of construction site control and monitoring data analysis. Results and Discussion The rapid building modeling technology based on the integration of AI and BIM uses AI algorithms to analyze 2D architectural design drawings (CAD) and form building data packages for the construction of three-dimensional DT models (Paduano, 2023). It supports the generation of the architectural, structural (including steel structure), and mechanical and electrical specialties, and can quickly generate BIM models that restore the spatial information of buildings. The BIM models are generated with high efficiency and accuracy, support seamless integration with mainstream BIM software and platforms, and are widely used in multiple scenarios, such as the digital construction and intelligent management of buildings, parks, communities, and cities.
The business architecture of the auto-ML algorithm platform system is organized into six layers: device, communication, cloud foundation, data, platform, and application. This enables rapid modelling technology based on the fusion of AI, DT, and BIM. Data collection devices, such as those for scanning indoor and outdoor environments, the human body, and other objects, make up the bulk of the device layer; the IoT, wireless networks, 4G/5G, and fiber optic networks are the major components of the communication layer, which guarantees steady and seamless data transmission while the system is operating. The primary components of the cloud foundation layer, which is in charge of allocating, scheduling, using, and managing cloud resources, are resource management and scheduling systems and virtualization hosts. Information from BIM, 2D CAD drawings, databases, the IoT, and other sources makes up the data layer. At the heart of the system is the platform layer, which houses essential components such as algorithms for automatic high-precision 3D building modelling with AI and BIM, XR fusion, big data construction using auto-ML, geographic information systems (GIS), and so on (Lin, 2023). The application layer sits atop the hierarchy; current use cases include smart construction sites, buildings, communities, venues, firefighting, exhibitions, etc. According to the actual needs of the project, the functions of the deep foundation pit intelligent construction platform mainly cover seven aspects, as shown in Figure 4. Personnel tracking The project department keeps track of all the employees and managers involved in the project, creates a personnel ledger tied to biometric and positioning devices, and updates the smart construction platform every day with details on the types, numbers, and whereabouts of all the workers and managers who arrive at the site. With this data in hand, project managers can accomplish the following: develop an automated attendance system from the data collected from on-site personnel; adjust the schedule as necessary if the number of workers on site differs from what was anticipated; and rely on the platform to automatically issue a warning prompt whenever personnel enter a designated dangerous area, ensuring their safety. Video monitoring Strategic video monitoring spots are located on the building site and assigned to the corresponding locations in the building information model. The smart construction platform's video monitoring center function connects the various units, including the project owner, the construction party, and the supervision party. At any given moment, supervisors from any involved party can use smart mobile terminals or personal computers to see what is happening on-site, learn about any problems that have arisen, and check the security of the monitoring equipment. Quality control Quality control comprises two components: quality inspection and quality acceptance. During the daily quality inspection process, management personnel create a quality inspection plan by considering the management system and the specific circumstances of the project. They capture images of any quality issues identified during the inspection, along with accompanying text explanations. These images and explanations are then uploaded to the BIM platform, linking the quality problems to the BIM model and enabling traceability management.
Each subcontracting unit is required to individually address and resolve every matter in accordance with its respective responsibilities, ultimately establishing a comprehensive quality management system. Upon the completion of a subproject, the subcontracting unit can submit an acceptance application on the smart construction platform for the quality acceptance process, and the quality manager then proceeds with the quality acceptance based on this application. In the event of acceptance failure, a notice of non-conformity is issued; after repair and review, the acceptance procedure is concluded by resolving the warning. The complete acceptance process generates records on the platform, offering fundamental data for the subsequent traceability of operation and maintenance quality. Foundation monitoring The IoT is utilized to regularly monitor the various monitoring items in the deep foundation, and DT and AI are used to intelligently analyze the collected monitoring data, in order to guide construction through the auto-ML algorithm, detect safety risks in a timely manner, and avoid accidents. Environmental monitoring The smart construction platform allows for the transmission of real-time data from environmental monitoring sensors, covering site-specific metrics such as temperature, wind speed, humidity, and dust. This allows the various environmental monitoring metrics to be shared and helps managers gain a timely understanding of the environmental situation on-site. The system is designed to automatically activate the sprinkler system to improve the construction site environment if the concentration of pollutants or particles exceeds the alert value (a minimal sketch of this alert logic is given at the end of this section). Progress tracking To track how far along a project is, the BIM model can be connected to data about the building's progress, and the model's attributes can then be modified to reflect the real-time status of the project more accurately. Smart construction platforms also allow the actual progress to be compared with the projected progress, enabling management to swiftly and effectively address any significant deviations. Engineering quantity statistics The required quantities of the various construction materials at the various time nodes of the project, including concrete, steel, and other materials, can be obtained by analyzing and calculating the BIM model through auto-ML. This assists in the timely procurement of the relevant materials and ensures that the construction of the project proceeds smoothly.
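The following Python sketch illustrates the environmental-monitoring alert logic described above. It is purely illustrative: the threshold value, the field names, and the Sprinkler interface are assumptions, not part of the platform described in this article.

from dataclasses import dataclass

PM10_ALERT = 150.0  # ug/m^3; assumed alert value for particulate matter

@dataclass
class Reading:
    sensor_id: str      # perception-layer device that produced the sample
    pm10: float         # particulate concentration, ug/m^3
    temperature: float  # deg C
    wind_speed: float   # m/s

class Sprinkler:
    def activate(self, zone: str) -> None:
        print(f"sprinkler ON in zone {zone}")

def process(reading: Reading, sprinkler: Sprinkler) -> None:
    """Platform-layer handling of one reading arriving via the network layer."""
    if reading.pm10 > PM10_ALERT:
        # exceedance: trigger dust suppression and record a warning
        sprinkler.activate(zone=reading.sensor_id)
        print(f"WARNING {reading.sensor_id}: PM10 = {reading.pm10} ug/m^3")

process(Reading("gate-3", pm10=180.0, temperature=21.5, wind_speed=2.1),
        Sprinkler())

In a deployment, process() would be driven by messages from the network layer and the warning would be written to the platform's records rather than printed.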
Conclusion The development of smart cities all over the world is currently undergoing rapid and significant growth. The digital economy is gradually maturing as the digital-world strategy is implemented. BIM, IoT, DT, and AI are the primary driving forces behind the development of the digital economy. We can solve the urban digitization problem in smart city development by actively exploring and innovating technologies such as AI and BIM, advancing ML, expanding application scenarios and boundaries, and incorporating upcoming technologies such as 5G/6G, edge cloud, industrial IoT, blockchain, and quantum communication. Case projects that used a deep foundation construction approach based on BIM, IoT, DT, and AI saw significant improvements in construction efficiency, implementation progress, risk control, and quality. Organically integrating information technology, particularly BIM, IoT, DT, and AI, into the construction process is currently a major trend in the industry's development; this trend will lay the groundwork for environmentally friendly, safe, and efficient building practices. These technologies offer extensive opportunities for growth, promising possibilities, and demanding responsibilities in the context of smart cities. This approach will help overcome crucial technological obstacles, generate value, introduce innovative methods, and establish models. Figure 1. The Three Main Fields in Building Information Modeling Figure 2. A Digital Twin Model for Building Structure in a Smart City Figure 3. Overall Architecture of a Smart Construction Platform for Deep Foundation Pits
A Series Solution for the Vibration of Mindlin Rectangular Plates with Elastic Point Supports around the Edges A series solution for the transverse vibration of Mindlin rectangular plates with elastic point supports around the edges is studied. The series solution is obtained using the improved Fourier series method, in which the vibration displacements and the cross-sectional rotations of the midplane are represented by a double Fourier cosine series and four supplementary functions. The supplementary functions are expressed as combinations of trigonometric functions and a single cosine series expansion and are introduced to remove the potential discontinuities, along the edges, associated with the original admissible functions when they are viewed as periodic functions defined over the entire x-y plane. This series solution is accurate in the sense that it explicitly satisfies, to any specified accuracy, both the governing equations and the boundary conditions. The convergence, accuracy, stability, and efficiency of the proposed method are examined through a series of numerical examples. Numerical examples of the nondimensional frequencies and mode shapes of Mindlin rectangular plates with different point-supported edge conditions are given. Introduction The rectangular plate is of great importance in various engineering branches, such as aerospace, electronics, and mechanical, nuclear, and marine engineering. A better understanding of its dynamic characteristics is valuable when designing a plate-type structure. In the early stages of research on rectangular plates, the majority of work concerned thin plates based on the classical Kirchhoff hypothesis. However, because transverse shear deformation and rotary inertia are neglected there, the vibration frequencies of thick plates are overestimated. Investigations of Mindlin plates began to attract more and more attention after the first-order plate theory was proposed by Mindlin [1]. The vibration characteristics of Mindlin plates with classical and elastic edge supports have been well investigated [2][3][4][5]. Few studies have focused on the vibration behavior of Mindlin plates with elastic point supports around the edges. However, the boundary conditions of plates in practical engineering applications are not always classical or elastic along the whole edge, and boundary conditions consisting of elastic point supports around the edges do occur. It is therefore of great significance to study the vibration behavior of Mindlin plates with elastic point supports around the edges. In the field of vibration analysis of structures subject to elastic edge/point supports, Takashi and Jin [6] used the collocation method to investigate Mindlin plates with constant thickness and two opposite edges simply supported. Jiarang and Jianqiao [7] established the three-dimensional state equation for laminated thick orthotropic plates with simply supported edges and obtained numerical results. Ohya et al. [3] investigated the free vibration characteristics of rectangular Mindlin plates with simultaneous elastic edge and internal supports via the superposition method. Hosseini-Hashemi et al. [8] proposed an exact closed-form procedure to solve the free vibration of moderately thick rectangular plates with two opposite edges simply supported. Dozio et al.
[9] developed a Ritz method using a set of trigonometric functions to obtain accurate modal properties of rectangular plates with arbitrary thickness. Maretic [10] analyzed the transverse vibration of a circular plate loaded by uniform pressure along its edge. Based on shear deformable plate theory, Bahmyari [11] used the element-free Galerkin method to analyze the free vibration of inhomogeneous moderately thick plates with point supports resting on a two-parameter elastic foundation; the vibration results obtained are in very good agreement with the available literature despite the use of a small number of nodes. Bashmal et al. [12] investigated the in-plane free vibrations of an elastic, isotropic annular disk with elastic constraints at the inner and outer boundaries, applied either along the entire periphery of the disk or at a point. Esendemir [13] carried out an analytical elastoplastic stress analysis of a polymer-matrix composite beam of arbitrary orientation, supported at two ends and acted upon by a force at the midpoint. Foyouzat et al. [14] presented an analytical-numerical approach to determine the dynamic response of thin plates resting on multiple elastic point supports with time-varying stiffness. Gan et al. [15] studied the effect of an intermediate elastic support on the vibration of functionally graded Euler-Bernoulli beams excited by a moving point load. Kucuk et al. [16] carried out an analytical elastic-plastic stress analysis of an aluminum metal-matrix composite beam reinforced by unidirectional steel fibers, supported at the ends and acted upon by a force at the midpoint. Hosseini-Hashemi et al. [17][18][19] used the generalized differential quadrature method to study buckling and dynamic transverse vibration characteristics. Setoodeh and Karami [20] employed a three-dimensional-elasticity-based layerwise finite element method (FEM) to study the static, free vibration, and buckling responses of general laminated thick composite plates; in that work, elastic line and point supports were successfully incorporated for thick plates. On the basis of three-dimensional elasticity theory, Xu and Zhou [21] studied the bending of simply supported rectangular plates on point supports, line supports, and elastic foundations. Wu [22] analyzed the free vibration of arbitrary quadrilateral thick plates with internal columns and elastic edge supports using the powerful pb-2 Ritz method and Reddy's third-order shear deformation plate theory. Wu and Lu [23] studied the free vibration of rectangular plates with internal columns and elastic edge supports using the pb-2 Ritz method. Other literature related to this field can be found in [24][25][26][27][28]. From the literature review, it can be seen that most research has focused on classical plates. Only a few studies have considered the vibration characteristics of Mindlin plates with point supports around the edges, and they mainly used the collocation method, analytical-numerical methods, and so on. To the best of the authors' knowledge, no published work has addressed the vibration characteristics of Mindlin plates with point supports around the edges using the modified Fourier method. Thus, a unified, efficient, and accurate formulation to deal with the free vibration of Mindlin plates subjected to arbitrary point-supported boundary conditions is necessary and of great significance.
In this paper, a modified Fourier solution for the free vibration of Mindlin rectangular plates with elastic point supports around the edges is proposed. The vibration displacements and the cross-sectional rotations of the midplane are represented by a double Fourier cosine series and four supplementary functions, the latter taking the form of combinations of trigonometric functions and a single cosine series expansion. Theoretical Formulations 2.1. Point-Supported Edge Conditions. The rectangular Mindlin plate with arbitrary elastic point edge supports is shown in Figure 1. The boundary conditions are represented by three kinds of restraining springs [29][30][31][32][33][34], namely translational, rotational, and torsional springs. The springs are evenly arranged on each edge of the Mindlin plate. By changing the stiffnesses of the springs, different boundary conditions can be achieved [35,36]. The governing differential equations for the free vibration of a Mindlin plate are given in (1)-(3). In these formulas, the transverse displacement is denoted by w, ψx is the slope along the x direction, and ψy is the slope along the y direction. ρ denotes the mass density, κ is the shear correction factor, and h is the thickness of the plate. G = E/(2(1 + ν)) is the shear modulus, ν is Poisson's ratio, and D = Eh^3/(12(1 − ν^2)) is the flexural rigidity. The bending moments are expressed by (4) and (5), the twisting moment is calculated by (6), and the transverse shearing forces in the plate are expressed by (7) and (8). There are three kinds of forces along every edge, namely the bending moment, the twisting moment, and the shearing force. The rotational, torsional, and translational springs along every edge are the counterparts of these three forces in this paper. The boundary conditions for an elastically restrained rectangular plate are as follows. At x = 0, the boundary conditions take the form of (9), where k0(y), K0(y), and T0(y) represent the stiffness distributions of the translational, rotational, and torsional springs, respectively, on the boundary edge x = 0; for point supports these distributions become sums of Dirac delta terms, where N0 is the number of elastic point supports on the edge x = 0, δ(y) is the Dirac delta function, and k0i, K0i, and T0i are, respectively, the stiffnesses of the translational, rotational, and torsional springs at the i-th support located at y0i. Similarly, the remaining boundary conditions on the other three edges, y = 0, x = a, and y = b, can be expressed in the same manner. According to the Mindlin plate theory, the transverse displacement of the plate middle surface and the rotations of the cross section along the x direction and the y direction are utilized. In the traditional solution approach, the admissible functions are usually expressed as Fourier series expansions, because the Fourier functions constitute a complete set and exhibit excellent numerical stability [37][38][39][40][41][42][43]. However, the conventional Fourier series expression has some defects: convergence problems arise along the boundary edges except for a few simple boundary conditions, and the derivatives of a Fourier series cannot in general be obtained simply through term-by-term differentiation.
In this study, in order to overcome the shortcomings of the conventional Fourier series expression, the admissible functions are expressed in a more complete form of Fourier series expansion, as given in (45)-(47), where A_mn, B_mn, and C_mn denote the unknown coefficients of the double cosine series for w, ψx, and ψy, the remaining coefficients belong to the supplementary terms, and λ_am = mπ/a, λ_bn = nπ/b. The specific expressions of the auxiliary functions ζ_a(x) and ζ_b(y) are defined in the equations that follow. As shown in (45)-(47), the supplementary functions ζ_a¹, ζ_a², ζ_b¹, and ζ_b² are used in the x- and y-direction displacement expressions. It is easy to verify that ζ_a¹(x) and ζ_a²(x) have a unit normal derivative at one edge (x = 0 or x = a) and a vanishing normal derivative at the other; similar conditions hold for the supplementary functions in the y-direction, namely ζ_b¹(y) and ζ_b²(y). Although these conditions are not strictly necessary, they are imposed purely to simplify the subsequent mathematical expressions and the derivation process. Examining the admissible functions in (45)-(47), one will notice that, besides the standard 2D Fourier series defined over the domain ((0, a) ⊗ (0, b)), four additional 1D Fourier expansions are also included in the admissible function expressions. In light of (52), it is not difficult to see that the fourth term (or the third single Fourier series) on the right side of (45) is equal to the normal derivative of the displacement function w(x, y) at the edge x = 0. Thus, this term will actually inherit any potential discontinuity associated with the normal derivative at the boundary x = 0. Similarly, the other three terms take care of the possible discontinuities at the remaining edges. Thus, the 2D Fourier expansion now represents a residual displacement function that is periodic over the entire x-y plane. The convergence characteristics of the Fourier series representation of such a function are well established in mathematics; namely, the Fourier series will converge at a rate of at least O(m⁻³). By substituting (45)-(47) into boundary condition (33) and expanding all the y-related terms, except for those containing the point-support stiffness distributions on the edge x = 0, into Fourier cosine series, one obtains (54). The expressions for the Fourier cosine expansion coefficients of the related functions can be found in Appendix A. To establish the relationship between the Fourier coefficients in (54), the terms on the left side shall also be expressed in the form of cosine series, resulting in (55), where the symbols of the form δ_m0 and δ_n0 are Kronecker deltas. Using similar procedures, the other seven boundary condition equations can be obtained; thus, a total of twelve constraint equations can be derived. 2.2. Solving the Governing Differential Equations. By substituting (45)-(47) into the governing differential equations (1)-(3), expanding all the resulting terms and sine functions into cosine series, and comparing like terms, a set of equations in matrix form is deduced, written compactly as (74), with the coefficient matrices and the vectors of Fourier coefficients defined in the accompanying expressions. The elements of these matrices can be obtained directly from the substitution process. The coefficients associated with the supplementary (boundary) terms cannot be treated as independent variables and need to be eliminated from (74) by making use of the constraint equations (70). By doing so, the final characteristic equation can be written as a standard eigenvalue problem in condensed stiffness and mass matrices, each formed from the original matrix plus a correction term involving the inverse of the constraint matrix, and the modal frequencies and corresponding eigenvectors can then be determined by solving a standard matrix eigenproblem. The elements of each eigenvector are the Fourier coefficients of the corresponding mode, whose mode shape in physical space can be calculated directly using (45), (46), (47), and (70).
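The last step above reduces to linear algebra once the matrices have been assembled. A minimal sketch in Python is given below; the matrix names (K, Kb, M, Mb for the blocks of (74) and H, Q for the constraint equations (70)) are hypothetical labels chosen for illustration, not the authors' notation or code:

import numpy as np
from scipy.linalg import solve, eig

def solve_modes(K, Kb, M, Mb, H, Q, n_modes=8):
    """Condense out the supplementary coefficients and solve the eigenproblem.

    The constraint equations are assumed in the form H @ b = Q @ e, where e holds
    the independent 2D Fourier coefficients and b the supplementary coefficients.
    """
    S = solve(H, Q)                 # b = H^-1 Q e
    K_bar = K + Kb @ S              # condensed "stiffness" matrix
    M_bar = M + Mb @ S              # condensed "mass" matrix
    vals, vecs = eig(K_bar, M_bar)  # (K_bar - omega^2 M_bar) e = 0
    vals = np.real(vals)
    order = np.argsort(vals)
    omega = np.sqrt(np.abs(vals[order[:n_modes]]))    # modal frequencies (rad/s)
    return omega, np.real(vecs[:, order[:n_modes]])   # columns: Fourier coefficients of each mode

The returned eigenvector columns can then be substituted back into (45)-(47) to reconstruct the mode shapes in physical space.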
Numerical Examples and Discussion. In this section, results and discussion on the free vibration of Mindlin rectangular plates are presented to verify the accuracy and flexibility of the proposed method. On that basis, some new results are obtained for Mindlin rectangular plates with various aspect ratios and thicknesses subjected to different point-supported conditions. The discussion is arranged as follows. Firstly, the convergence of the present method is checked for a square, point-supported plate. Then, the accuracy of the method is compared with results obtained by the FEM commercial program ABAQUS (S4R model); the FEM results that follow are all based on ABAQUS (S4R model) unless otherwise stated. In addition, point-supported plates with different geometric parameters and various thicknesses are calculated by the present method. The method is then extended to the multipoint-supported plate. After the convergence for the multipoint-supported plate is verified, further multipoint-supported plates with different geometric parameters and numbers of clamped points are calculated by this method. 3.1. Convergence Studies. A Mindlin plate with arbitrary elastic point edge supports is shown in Figure 1. The first example concerns the point-supported square plate, namely case 1 in Figure 2. In Table 1, the first eight nondimensional frequency parameters Ω = ωa²(ρh/D)^(1/2)/π² for the square plate of case 1 are given for different restraining stiffnesses (with the translational and rotational stiffnesses set equal and the torsional stiffness set to zero) at each point support, and the results are compared with the FEA results. The FEA results of Tables 1 and 2 are both obtained with the FEM commercial program ABAQUS (S4R model, 18720 elements). The geometric dimensions of case 1 are a = b = 1 m and h = 0.1 m, and the series expansion is truncated at M = N = 15. It is noted that the current results compare well with the FEA results. As shown in Table 1, when the nondimensional translational and rotational spring parameters are in the range 10^7-10^9, the nondimensional frequency parameters have converged; in other words, nondimensional spring parameters in this range effectively produce rigid point supports. When the nondimensional spring parameters are less than 10, the boundary conditions can be regarded as free. To better understand the modes of the square plate under free boundary conditions, the first six mode shapes of square plates with completely free boundary conditions calculated by the present method and by FEA are presented in Figures 3 and 4. When the nondimensional spring parameters lie between 10^2 and 10^6, the boundary conditions are elastic. As shown in Table 2, the first eight nondimensional frequency parameters converge quickly at M = N = 10 to the given 5-digit precision. For simplicity, the displacement expansion is truncated to M = N = 10 in all subsequent calculations. To rule out any particularity of the square plate, the second example concerns the point-supported rectangular plate 1b in Figure 2. The geometric dimensions of case 2 are a = 2 m, b = 1 m, and h = 0.1 m, and the series expansion is truncated at M = N = 10. In Table 3, the first eight nondimensional frequency parameters for plate 1b calculated by the new method and by FEA are given; the comparison in Table 3 confirms the accuracy of the new method for an ordinary rectangular Mindlin plate. The FEA results for plate 1b in Table 3 and Figure 8 are obtained with the FEM commercial program ABAQUS (S4R model, 37500 elements).
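For readers reproducing the tabulated values, the nondimensionalization quoted above is a one-line computation; the sketch below uses placeholder material properties (the source does not state them here), so the printed number is illustrative only:

import numpy as np

def nondimensional_frequency(omega, a, rho, h, E, nu):
    """Omega_bar = omega * a^2 * sqrt(rho*h/D) / pi^2 with D = E*h^3 / (12*(1 - nu^2))."""
    D = E * h**3 / (12.0 * (1.0 - nu**2))
    return omega * a**2 * np.sqrt(rho * h / D) / np.pi**2

# Hypothetical steel-like properties for the 1 m x 1 m x 0.1 m plate of case 1
print(nondimensional_frequency(omega=2000.0, a=1.0, rho=7800.0, h=0.1, E=2.1e11, nu=0.3))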
3.2. Point-Supported Plate. The accuracy and good convergence characteristics of the method have been demonstrated above for the point-supported plate. In Table 4, the first eight nondimensional frequency parameters are given for different stiffness values, aspect ratios, and thickness ratios. The first six mode shapes of square plates with completely elastic point supports (translational and rotational stiffness parameters of 10^2 and zero torsional stiffness), calculated by the present method and by FEA, are presented in Figures 5 and 6. The FEA results for plate 1a in Figure 6 are obtained with the FEM commercial program ABAQUS (S4R model, 18720 elements). The first six mode shapes from the current method and FEA match well. It can be seen from Table 4 that the nondimensional frequency parameters under elastic boundary conditions tend to decrease with the aspect ratio, whereas the first three modes in each case basically represent rigid-body motions, so their change is smaller than that of the higher modes. With increasing nondimensional spring parameters, the boundary conditions change from elastic to clamped. When the spring stiffness is set to 10^8 for the translational and rotational springs and zero for the torsional springs, the first six mode shapes of plates 1a and 1b, displayed in Figures 7 and 8, can be regarded as point-clamped. 3.3. Multipoint-Supported Plate. The previous subsections established that the present method is convergent and accurate for the point-supported plate. In this subsection, the nondimensional frequency parameters Ω = ωa²(ρh/D)^(1/2)/π² of the multipoint-supported square plate 1a (a = 1 m, b = 1 m, and h = 0.1 m) are shown in Table 5 for different numbers of terms in the series expansion (M and N). The stiffnesses of the translational and rotational springs are set to 10^8 N/m and 10^8 N·m/rad, respectively, and there are 26 clamped points on every edge. The results are already accurate when M and N are small; when the truncation numbers M and N exceed 10, the results are almost invariant. The displacement expansion is therefore truncated to M = N = 10 in the subsequent calculations for multipoint-supported plates. In the absence of relevant literature results, FEM data are given for comparison. The FEM data are obtained with ABAQUS (S4R), with each edge divided into 100 pieces, which is considered adequately fine to capture the spatial variations of these lower-order modes. The mode shapes calculated by FEM are displayed in Figure 10, and the first six mode shapes of square plates with 26 clamped points on every edge calculated by the present method are presented in Figure 9. Both the nondimensional frequencies and the mode shapes of the square plate match well between the new method and FEA, so the accuracy and good convergence characteristics of the method also hold for the multipoint-supported plate. In Table 6, the first eight nondimensional frequency parameters are given for different numbers of clamped points, aspect ratios, and thickness ratios. Similarly, the nondimensional frequency parameters tend to decrease with the aspect ratio when the plate is multipoint-supported.
Figure 11 shows the difference in nondimensional frequency parameters between plates with different numbers of clamped points and the fully clamped boundary condition, for three aspect ratios and thickness ratios. The first three frequency parameters of the multipoint-supported plate and the clamped plate are calculated by the current method and by ABAQUS, respectively. As the number of clamped points on each edge increases, the percentage error between the multipoint-supported plate and the clamped plate narrows quickly. When the number of clamped points per edge exceeds 16, the discrepancy between the nondimensional frequency parameters of the multipoint-supported plate and the clamped plate is tiny and essentially constant; the maximum percentage error is 0.31%, which means that the multipoint boundary condition is practically equivalent to the clamped boundary condition. Based on different plate theories, the first frequency and the associated error of multipoint-clamped square plates with different thickness ratios and numbers of clamped points are shown in Figure 12. The error is defined as error (%) = |Ω_i − Ω_3D|/Ω_3D × 100%, in which the subscript i denotes the classical plate theory or the Mindlin plate theory and Ω_3D is the result of three-dimensional (3D) elasticity theory. It is easy to see that classical thin plate theory overestimates the plate frequencies because it neglects transverse shear deformation and rotary inertia. The discrepancy between the Mindlin and classical plate theories is small enough to neglect for thin plates with a thickness ratio below 0.05. However, the error of the classical plate theory increases rapidly when the thickness ratio is more than 0.1 and is generally above 10%, whereas the error of the Mindlin plate theory is only 1.1% at a thickness ratio of 0.2. The Mindlin theory is therefore necessary when the plate is not thin, that is, when the thickness ratio exceeds about 0.1.
Conclusions. In this paper, a modified Fourier method has been presented to study the free vibration behavior of moderately thick rectangular plates with different point-supported conditions. The first-order shear deformation plate theory is adopted to formulate the theoretical model. The displacement and rotation fields of the plates, regardless of boundary conditions, are sought as a new form of trigonometric series expansion in which several closed-form supplementary functions are introduced to ensure and accelerate the convergence of the series. Not only is the series representation of the solution applicable to any boundary conditions, but the convergence of the series expansion is also substantially improved. The Rayleigh-Ritz method is employed to obtain the solution from the energy description of the plates. The convergence of the present solution is examined and its accuracy is validated by comparison with FEM data, with excellent agreement obtained in all comparisons. The proposed method provides a unified means of extracting the modal parameters and predicting the vibration behavior of moderately thick plates with arbitrary point-supported edge restraints. A variety of free vibration results for moderately thick rectangular plates with different aspect ratios, thickness ratios, and boundary conditions are presented. From the results in this paper, it is found that the nondimensional frequency tends to decrease with the aspect ratio, whereas under elastic boundary conditions the change in the first three (rigid-body-like) modes is smaller than that in the higher modes. In addition, as the number of clamped points on the edges of the rectangular plate increases, the boundary condition converges to the fully clamped edge condition. Finally, it is verified that classical thin plate theory overestimates the plate frequencies because it neglects transverse shear and rotary inertia; it is necessary to introduce the Mindlin theory when considering the vibration of moderately thick plates. Figure 1: A Mindlin plate with arbitrary elastic point edge supports. Figure 2: The two elastic point support arrangements considered in the calculations. Figure 3: The first six mode shapes of square plates with completely free boundary conditions calculated by the present method. Figure 4: The first six mode shapes of square plates with completely free boundary conditions calculated by the FEA method. Figure 5: The first six mode shapes of square plates with completely elastic boundary conditions (translational and rotational stiffness parameters of 10^2, zero torsional stiffness) calculated by the present method. Figure 6: The first six mode shapes of square plates with completely elastic boundary conditions (translational and rotational stiffness parameters of 10^2, zero torsional stiffness) calculated by the FEA method. Figure 7: The first six mode shapes of square plates with completely elastic boundary conditions (translational and rotational stiffness parameters of 10^8, zero torsional stiffness) calculated by the present method. Figure 8: The first six mode shapes of rectangular plates with completely elastic boundary conditions, aspect ratio a/b = 2, thickness ratio h/b = 0.1, translational and rotational stiffness parameters of 10^8, and zero torsional stiffness, calculated by the present method.
Figure 10: The first six mode shapes of square plates with uniform completely clamped boundary conditions, calculated by the FEA method. Figure 11: The discrepancy in nondimensional frequency parameters between plates with different numbers of clamped points and the clamped boundary condition (a = 1 m).
5,478.2
2018-06-28T00:00:00.000
[ "Engineering" ]
Bioremediation of Heavy Metals by the Genus Bacillus Environmental contamination with heavy metals is one of the major problems caused by human activity. Bioremediation is an effective and eco-friendly approach that can reduce heavy metal contamination in the environment. Bioremediation agents include bacteria of the genus Bacillus, among others. The best-described species in terms of bioremediation potential within Bacillus spp. are B. subtilis, B. cereus, and B. thuringiensis. This bacterial genus has several bioremediation strategies, including biosorption, extracellular polymeric substance (EPS)-mediated biosorption, bioaccumulation, and bioprecipitation. Owing to these strategies, Bacillus spp. strains can reduce the amounts of metals such as lead, cadmium, mercury, chromium, arsenic, or nickel in the environment. Moreover, strains of the genus Bacillus can also assist phytoremediation by stimulating plant growth and the bioaccumulation of heavy metals in the soil. Therefore, Bacillus spp. represent one of the best sustainable solutions for reducing heavy metals in various environments, especially soil. Introduction Heavy metals are a collection of metals and semi-metals characterized by high density and usually toxic properties [1][2][3][4][5]. Of all the heavy metals found in the environment, dangerously increased amounts of Cd and Pb are the biggest concern; these are ballast elements that are completely unnecessary for living organisms [1]. These metals enter the biological cycle largely through crops, which take up metals from soils [1,[6][7][8]. Studies have shown that these elements cause changes in the cell cycle, carcinogenesis, or apoptosis [9]. In 2015, the United Nations (UN) set a key sustainable development objective to reduce diseases and deaths associated with soil contamination by 2030 [10]. To achieve this goal, it is necessary to seek sustainable methods for remediating heavy metals in soil [11]. There are several conventional techniques to remove heavy metals, including chemical precipitation, oxidation or reduction, filtration, ion exchange, reverse osmosis, membrane technology, evaporation, and electrochemical treatment. However, most of these techniques are becoming ineffective [8,12,13]. Metals in soils form such stable compounds that natural removal processes are unable to remove them [3,4,6,14,15]. Therefore, it is extremely difficult to reduce the influx of these toxic compounds into the human body [1]. Thanks to modern research, improved bioremediation methods using suitable microbial species (which can act alone or support the action of hyperaccumulators) are becoming more common in environmental protection [8,16]. Thus, this review aims to summarize the current knowledge on the possibility of using Bacillus spp. directly in soil bioremediation and in supporting phytoremediation, a technology that uses higher plants in environmental clean-up processes. Scale of Heavy Metal Contamination Heavy metals present in the environment are of various origins. They can originate from natural processes, such as rock weathering, volcanic eruptions, forest fires, or soil-forming processes. However, the most significant sources of heavy metal contamination are anthropogenic processes [1,2,15]. Metals have been, and continue to be, important raw materials for economic development [3,5].
Since the early Middle Ages, numerous mines have been established in Europe in areas where metal ores were shallow, i.e., silver and lead, gold, arsenic and gold, copper, tin, and iron [38]. Nevertheless, their extraction and processing contribute to strong local contamination of the environment, especially the soil [5,6]. Particularly high concentrations of metals are associated with waste resulting from the historical processing of sulfide metal ores [5]. As mentioned previously, metals nowadays have applications in numerous industries, and the volume of emissions resulting from their processing varies strongly. Currently, the dominant source of atmospheric emissions of most heavy metals is the stationary combustion of solid and liquid fuels in the power industry, accounting for more than half of total emissions from anthropogenic sources [5,38,39]. In Europe, there are approximately 2.5 million sites potentially contaminated with heavy metals and organic pollutants [40]. In the US, between 235,000 and 355,000 sites require remediation [11]. Estimating the total degraded area globally is not easy; it is reported to range from less than 1 to more than 6 billion hectares, with widespread disagreement on its spatial distribution [3]. In some cases, regulations on the permissible amounts of heavy metals in soil vary from country to country. For example, the highest permissible amount of Pb in soil in Romania is 50 mg kg−1, while in the Netherlands it is 140 mg kg−1. In contrast, for Cr, the maximum amount allowed in soil is the same in both countries: 100 mg kg−1 [2]. Results from the literature indicate that in many countries, the amounts of some heavy metals present in soil far exceed the permissible amounts. For example, in Iran, the amount of Pb in some contaminated soils has been measured at 57 mg kg−1, while the maximum allowed amount is 25 mg kg−1 [41]. In China, amounts of Ni in soils have been measured in the range of 40-200 mg kg−1, up to three times higher than the permissible amount of 60 mg kg−1 [42]. Similarly high levels can be observed in many other countries [43][44][45][46]. Impacts of Heavy Metal Pollution on Environment and Human Health A major problem with heavy metal contamination is that the metal ions are not biodegradable, which causes them to circulate in the environment; hence, they may persist in the environment in a toxic form for at least 200 years [1,[3][4][5][6]14,15]. These soil-polluting compounds can inhibit the growth of soil microorganisms. They also disrupt the physiological functions of microorganisms, as well as the processes related to the decomposition and transformation of organic matter [47]. Disruption of the decomposition of organic matter by microorganisms can lead to an increase in the pool of bioavailable forms of metals in the soil. Importantly, the forms in which heavy metals occur in soil are one of the factors determining their mobility and toxicity in the environment [6,16,47]. In biological systems, heavy metals interfere with enzymatic processes, disrupt the function of subcellular structures, and can cause damage through free radical processes, owing to physicochemical properties similar to those of physiologically active metals [5,45,48].
For instance, in the cell cytoplasm, metal ions readily bind to functional groups such as -SH, -OH, and -NH, which causes deformation of protein molecules, leads to a loss of their biological activity, and consequently to cell death [15,49]. The greatest risk of heavy metals results directly from their transport along the food chain and the phenomenon of bioaccumulation: the largest amounts of a substance are delivered to the last link, which is the human (Figure 1) [6,8,48,49]. The primary source of human health exposure to heavy metals is food, mainly of plant origin [6,7,49]. Therefore, the accumulation of heavy metals in crops intended for animal feed and direct human consumption should be limited [6,49]. In the human body, heavy metals can cause acute poisoning and chronic conditions. Most of the symptoms of heavy metal poisoning do not become apparent immediately after exposure, but only after many months or even years have passed [5,50,51]. The spectrum of heavy metal toxicity in the human body is very wide [5]. Despite a similar mechanism of toxicity, individual heavy metals often tend to affect different tissues and organs [5,49]. Lead accumulates primarily in bone tissue, cadmium in renal cortical tissue and the liver, while mercury accumulates in the form of methylmercury compounds in brain tissue, which can lead to severe neurological changes [5][6][7]49,50]. Moreover, some heavy metals and their compounds are classified as confirmed or probable carcinogens by the International Agency for Research on Cancer (IARC) [5,6,51]. In terms of environmental risk, two elements have ranked first for years: Cd and Pb. They are followed by As, Cr, Hg, and Zn [1]. According to the IARC, Cd is classified in Group 1, which includes substances that are carcinogenic to humans, while inorganic Pb compounds are classified in Group 2A, which includes substances that are probably carcinogenic to humans [6,51,52]. Heavy Metal Bioremediation Strategies Detected in Bacillus Microorganisms can use several strategies to remove heavy metals present in the environment (Figure 2) [8,20,53,54]. Biosorption, bioaccumulation, and bioprecipitation are the most common heavy metal removal strategies of the genus Bacillus [8,55]. Biosorption Biosorption is a physicochemical, metabolism-independent heavy metal uptake process based on cell membranes. It functions through negatively charged compounds present in the cell membranes. Importantly, the biomass used for biosorption is usually non-living biomass, since the process then proceeds more efficiently than with living microorganisms. The efficiency of this strategy depends mainly on several parameters, including surface properties (e.g., the functional groups present on the cell membrane), pH, temperature, and electrostatic interactions [12,57,58]. Understanding the biosorption mechanisms that enable the removal of heavy metals is crucial to optimizing the process. To date, several mechanisms occurring during the sorption process have been identified, and different mechanisms can proceed at the same time at different rates. The following biosorption mechanisms can be distinguished: (i) ion exchange, a reversible chemical reaction involving the exchange of ions for other ions of the same charge; (ii) complexation, in which heavy metal ions bind to functional groups present in cell membranes; and (iii) physical adsorption caused by intermolecular interactions, including Van der Waals forces (Figure 2) [57,59].
To date, several papers on biosorption with Bacillus spp. have been published [21,[60][61][62]. Some strains of Bacillus spp. may also have the ability to biosorb several different heavy metals. B. thuringiensis OSM29, isolated from the rhizosphere of cauliflower grown in soil irrigated with industrial effluent, was capable of remediating Cd, Cu, Cr, Ni, and Pb [63]. The biosorption capacity of B. thuringiensis OSM29 was highest for Ni (94%), while the lowest biosorption by the bacterial biomass was noted for Cd (87.0%). The researchers also observed that the biosorption efficiency depended on a few physicochemical parameters, such as pH, initial metal concentration, and contact time. For example, the optimum pH value for copper and lead biosorption efficiency was 6.0, while for Ni and Cr it was 7.0. Additionally, using Fourier transform infrared (FTIR) spectroscopy, the authors identified the following chemical functional groups involved in the sorption of heavy metals in the studied strain: amino, carboxyl, hydroxyl, and carbonyl groups [63]. Nevertheless, most studies concern the biosorption of single heavy metals. Strains of the genus Bacillus are capable of Hg biosorption. For instance, Sinha et al. [64] analyzed the biosorption potential of immobilized B. cereus cells for the bioremediation of mercury from synthetic effluent. Importantly, the experiment was conducted under various conditions. The maximum adsorption capacity of B. cereus (immobilized cells) was 104.1 mg g−1 (Hg2+), noted at pH 7.0, 30 °C, a contact time of 72 h, and a biomass concentration of 0.02 g L−1. Moreover, the average free energy value calculated using the Dubinin-Radushkevich (D-R) model was 15.8 kJ mol−1, indicating that the process was chemical rather than purely physical adsorption [64]. Chen et al. [65] conducted a study on Pb(II) biosorption using the strain B. thuringiensis 016 through batch and microscopic experiments. The authors noted that the highest Pb biosorption capacity of B. thuringiensis 016 was approximately 165 mg g−1 (dry weight). Interestingly, this study showed that the pH and the amide, carboxyl, and phosphate functional groups of the studied strain (examined by FTIR analyses and selective passivation experiments) greatly affected Pb biosorption. Furthermore, observation by scanning electron microscopy proved that Pb precipitates had accumulated on the surfaces of the bacterial cells [65]. Moreover, Bacillus spp. strains are also capable of biosorbing metals less toxic than those above. For instance, B. cereus AUMC B52 was capable of biosorbing Zn. The maximum adsorption capacity of B. cereus AUMC B52 calculated from the Langmuir adsorption isotherm was 66.6 mg g−1. In addition, the presence of amine, hydroxyl, carboxyl, and carbonyl groups, which are probably responsible for Zn(II) biosorption, was detected in the bacterial biomass using FTIR [66]. There are also examples of biosorption by B. cereus of arsenic, which occurs in the environment mostly in anionic forms. Giri et al. [67] detected an adsorption capacity of approximately 32 mg g−1 for arsenite at pH 7.5 and a biomass dose of 6 g L−1. The ability to biosorb arsenic was also noted in B. thuringiensis WS3. The maximum As(III) adsorption capacity was approximately 11 mg g−1 under the optimum As(III) removal conditions: 6 ppm As(III) concentration, pH 7, temperature 37 °C, and a biomass dose of 0.50 mg mL−1 [68].
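The maximum adsorption capacities quoted above are usually obtained by fitting equilibrium uptake data to the Langmuir isotherm. A minimal sketch of such a fit is given below; the equilibrium concentrations and uptake values are hypothetical numbers chosen for illustration, not data from the cited studies:

import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, b):
    """Langmuir isotherm: uptake q (mg/g) as a function of equilibrium concentration c_eq (mg/L)."""
    return q_max * b * c_eq / (1.0 + b * c_eq)

# Hypothetical equilibrium concentrations (mg/L) and measured uptakes (mg/g)
c_eq = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
q_obs = np.array([12.0, 21.0, 38.0, 50.0, 58.0, 63.0])

(q_max, b), _ = curve_fit(langmuir, c_eq, q_obs, p0=(60.0, 0.05))
print(f"q_max = {q_max:.1f} mg/g, b = {b:.3f} L/mg")  # q_max corresponds to the reported maximum adsorption capacity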
Bioremediation by Extracellular Polymeric Substances (EPS) Another important mechanism for the bioremediation of heavy metals possessed by many metal-tolerant bacteria is the uptake of metals through the secretion of extracellular polymeric substances (EPS) (Figure 2) [58,69]. EPS include compounds such as nucleic acids, humic acids, proteins, and polysaccharides that bind cationic metals with varying degrees of specificity and affinity [58,70]. Their importance in the bioremediation process lies in their participation in flocculation and in the binding of metal ions from solution [71]. Microorganisms that secrete exopolysaccharides are the most significant in the bioremediation of heavy metals [69]. Factors modulating the removal of metals by EPS include the initial metal concentration and pH [58]. To date, the ability to secrete EPS has been detected in several strains belonging to the genus Bacillus. For instance, the multi-metal-resistant (Pb, Cd, Cu, and Zn) strain B. cereus KMS3-1 was able to produce EPS (optimum conditions: pH 7.0, 120 h incubation time, 5 g L−1 sucrose, and 10 g L−1 yeast extract) [72]. Furthermore, optimization of EPS production using a central composite design revealed that the optimal sucrose and yeast extract concentrations for enhanced EPS production (8.9 g L−1) were 5 g L−1 and 30 g L−1, respectively. In addition, using FTIR, thin-layer chromatography (TLC), and high-performance liquid chromatography (HPLC), the researchers characterized the studied EPS as a heteropolysaccharide consisting of glucose, mannose, xylose, and rhamnose [72]. However, there are also studies focusing on bioremediation using EPS against contamination with a single heavy metal. Kalpan et al. [73] isolated an exopolysaccharide-producing bacterium, B. cereus VK1. The EPS was then purified, quantified, and further characterized by FTIR, gas chromatography-mass spectrometry (GC-MS), and thermogravimetric analysis (TGA). Interestingly, using statistical modeling (response surface methodology, RSM), the researchers optimized the medium to increase EPS production. The results showed that B. cereus VK1 cultured in LB was capable of adsorbing up to 80.22 µg Hg2+ in 20 min, while the strain grown in the RSM-optimized medium adsorbed up to 295.53 µg Hg2+ [73]. Moreover, the ability to produce EPS has also been detected in a species related to Bacillus spp., Paenibacillus jamilae. Its EPS showed a notable affinity for Pb in comparison with the other five metals tested: the removal of lead (303.03 mg g−1) was as much as ten times higher than the removal of the other metals. The studied EPS consisted of glucose (the most abundant sugar), rhamnose, galactose, fucose, and mannose [74]. Bioaccumulation In contrast to the biosorption described above, bioaccumulation is a cellular, energy-dependent process carried out by metabolically active microorganisms (Figure 2) [75]. Therefore, compared to biosorption, heavy metal uptake takes longer because it depends on biochemical features, the internal structure of the microorganism, its genetic and physiological capabilities, and the environmental conditions affecting bioaccumulation activity [53,76]. Moreover, the bioaccumulation process has also been found to be influenced by cell surface properties, including changes in charge. In addition, temperature affects the bioaccumulation process: a higher temperature may significantly disrupt the metabolic activity of a bacterial cell [53,77].
The best-known mechanism of bioaccumulation is probably the binding of heavy metals by metallothioneins. Metallothioneins are cysteine-rich, low-molecular-weight proteins, encoded for example by the bmtA gene, that facilitate the bioaccumulation of heavy metals (e.g., Pb, Hg, Ni, Cd) inside the cell [78]. Bacterial cells usually produce metallothioneins in response to enhanced exposure to metals [79,80]. This mechanism may be transferred on plasmids, facilitating its dispersion from one bacterial cell to another [81]. However, there are other bioaccumulation mechanisms, which are often not universal for all heavy metals. For instance, the bioremediation of As in bacteria of the genus Bacillus is mediated by the ars operon through the arsA, arsB, arsC, arsD, and arsR genes, where, e.g., arsA and arsB encode an ATP-driven efflux system and arsC encodes an arsenate reductase that reduces As(V) to As(III) [82]. In contrast, Pb bioaccumulation in Bacillus spp. is based on the pbrD gene [83], while the mechanisms leading to Cu accumulation by bacterial cells are encoded in the cusF gene, which contributes to the binding of copper in the periplasmic space [84]. In the case of mercury, bioaccumulation is related to merC gene expression [85]. Many studies confirm the ability of Bacillus spp. and related bacteria to bioaccumulate heavy metals [18]. For instance, metallothionein production has been detected in Bacillus spp., for example in B. cereus and B. megaterium [86]. Importantly, bacteria of the genus Bacillus are capable of bioaccumulating various heavy metals. B. cereus RC-1, grown at various pH values and initial metal concentrations, was able to remove several heavy metals, such as Cu2+ (16.7% maximum removal efficiency), Zn2+ (38.3%), Cd2+ (81.4%), and Pb2+ (40.3%), at initial concentrations of 10 mg L−1 and pH 7.0 [87]. Interestingly, the bio-removal of the two crucial metals, Cd2+ and Pb2+, was paralleled by cellular uptake of Na+ and Mg2+ from the medium, respectively [87]. In addition, B. coagulans tolerated Cr(VI) concentrations of up to 512 ppm and had an MIC (minimum inhibitory concentration) of 128 ppm for Pb(II). Moreover, after 72 h, this strain had removed 93% of 32 ppm Cr(VI) and 89.0% of 64 ppm Pb(II) [88]. In addition, B. cereus BPS-9 has shown great potential for Pb accumulation (79.3%) [53]. The authors also found that, despite a reduction in the growth rate, the superoxide dismutase activity of B. cereus BPS-9 increased with increasing lead concentration, manifested by an increase in nitro blue tetrazolium (NBT) reduction from approximately 4% to 78% [53]. Moreover, bacteria of the genus Bacillus have been shown to bioaccumulate arsenic, which is highly toxic to humans, as well. For example, Singh et al. [89] detected the ability to bioaccumulate and volatilize As(V) in cultures of B. aryabhattai. The bioaccumulation of the somewhat less toxic Ni has also been recorded in bacteria of the species B. cereus. Naskar et al. [90] found that growing B. cereus M161 cells, depending on the growth phase of the culture, accumulated up to 80% of the Ni(II) from aqueous solution, with surface binding (approximately 60%) dominating over intracellular accumulation (approximately 20%). The highest Ni(II) accumulation was recorded at pH 6.5, 32.5 °C, 2.5% inoculum volume, and 50 mL medium volume.
However, no growth of the studied strain was observed at Ni ion concentrations above 50 mg L−1 [90]. Bioprecipitation Bioprecipitation is another bioremediation strategy found in bacteria. This strategy involves converting free metal ions into insoluble complexes, thereby reducing their bioavailability and toxicity. Microorganisms can facilitate precipitation by catalyzing oxidative and reductive processes, leading to the precipitation of contaminants including Pb, Cd, Cr, Fe, and U. Some microorganisms have also been found to release phosphates and increase the precipitation of metal phosphates, while other bacteria are capable of precipitating hydroxides or carbonates by creating alkaline conditions (Figure 2) [91]. There are relatively few studies on precipitation carried out by Bacillus spp. Nevertheless, bacteria of the genus Bacillus can bioprecipitate the most toxic heavy metals, including lead and cadmium. For instance, the lead-resistant strains B. iodinium GP13 and B. pumilus S3 were found to facilitate the precipitation of lead in the form of lead sulphide (PbS) [92]. Moreover, bacteria capable of precipitating lead into lead phosphate (Pb3(PO4)2) include B. thuringiensis 016 [65]. Another example of bioprecipitation by Bacillus spp. is the study by Li et al. [93]: using energy dispersive spectroscopy, X-ray photoelectron spectroscopy, and selected area electron diffraction, the authors showed that B. cereus Cd01 was capable of bioprecipitating Cd into polycrystalline and/or amorphous cadmium phosphate and cadmium sulfide. Furthermore, Molokwane et al. [94] observed Cr(VI) reduction by precipitation after applying an enriched mixed culture consisting of bacteria of the genus Bacillus, including B. cereus and B. thuringiensis, and related genera such as Paenibacillus and Oceanobacillus. The highest reduction of Cr(VI) in aerobic cultures was obtained at a high concentration of 200 mg L−1, after incubation for 65 h [94]. To our knowledge, there have been no studies to date describing the bioprecipitation of other heavy metals (important from the point of view of pollution) by bacteria of the genus Bacillus. To summarize these subsections, it is worth adding that biologically enhanced precipitation may be used to remove metals and metalloids from a range of wastewaters, for example, acid mine drainage, electroplating, and tannery effluents [91]. Biological Removal of Heavy Metals Using Plant Growth-Promoting Bacteria There are still few studies on bioremediation by Bacillus spp. from an application perspective; most studies focus on bioremediation mechanisms and assess bioremediation efficiency in aqueous solutions containing heavy metals [65,90,94]. Furthermore, only a few studies present results on the bioremediation activity of Bacillus spp. without the involvement of plants. For instance, for the bioremediation of cadmium, a combination of the bacterium B. megaterium with earthworms (Eisenia fetida) was used; in an experiment with Cd-contaminated soil (approximately 2.5 mg Cd kg−1), this combination was more effective than bioremediation using earthworms alone [95]. On the other hand, the vast majority of studies on the application of Bacillus spp. as bioremediation agents also involve phytoremediation [96], indicating that the bioremediation action of Bacillus spp.
is not limited to playing a role in the geochemical cycling of heavy metals in soil [97,98]. Moreover, heavy-metal-accumulating plants supported by bacteria of this genus may be used to produce biogas, and digestate meeting the criteria for heavy metal content can be used as fertilizer. Thus, this type of approach appears to be the most appropriate in the context of bioremediation involving this microbial group [99,100]. Metal-accumulating plants can be supported by metal-resistant plant growth-promoting bacteria (PGPB), which can increase the efficiency of bioremediation [56,101,102]. Therefore, the use of PGPB has recently been expanded to include the remediation of contaminated soils in combination with crops, energy plants, and hyperaccumulators, that is, plants capable of accumulating extremely large amounts of heavy metals in their aboveground parts without suffering phytotoxic effects [100,103]. Plant stimulation by PGPB has been observed by many authors [11,101,[104][105][106][107]. PGPB may enhance plant growth either directly or indirectly. Direct mechanisms include the production of various biologically active substances, for instance, indole-3-acetic acid (IAA), gibberellins, cytokinins, and 1-aminocyclopropane-1-carboxylic acid (ACC) deaminase, as well as atmospheric nitrogen fixation (nitrogenase production) and phosphorus solubilization [56,[108][109][110][111][112]. Indirect mechanisms include the production of antibiotics (for example, cyclic lipopeptides), enzymes such as chitinases, cellulases, and glucanases, and siderophores [105,[113][114][115][116]. The abilities that make PGPB useful in phytoremediation processes include alleviating the harmful effects caused by heavy metal pollution (e.g., reduced chlorophyll levels and oxidative stress), boosting the heavy metal tolerance of plants, and enhancing the accumulation of heavy metals in plant tissues [11,[117][118][119][120][121]. Thus, bacteria can facilitate heavy metal remediation through several mechanisms. For instance, phytohormones such as IAA promote root elongation and increase root surface area (enhancing nutrient uptake), leading to an increase in plant biomass and hence a larger phytoremediation surface area [99,122,123]. In addition, beneficial microorganisms may help reduce ethylene stress in plants growing in metal-contaminated soil through ACC deaminase activity, which breaks down the ethylene precursor ACC; this results in the development of longer roots, enabling the phytoremediation process to proceed more efficiently [56,94,124,125]. Plant stress due to the presence of heavy metals can also be alleviated by the secretion of antioxidant enzymes by PGPB [96]. Additionally, PGPB release siderophores, iron-chelating compounds that enhance iron uptake by plant roots in hostile, metal-contaminated environments [104,[126][127][128]. Siderophores can also mobilize heavy metals, increasing metal accumulation by resistant bacteria (Figure 2) [56,[129][130][131]. In turn, plant endophytes (e.g., root endosphere endophytes) can also enhance phytoremediation through bioaccumulation mechanisms [122].
So far, several studies have described the support of phytoremediation by plant growth-promoting bacteria of Bacillus spp. [11,[132][133][134]. Most of the research on this topic concerns experiments conducted under controlled conditions. For instance, a study conducted under gnotobiotic conditions showed the possibility of phytoextraction of cadmium- and lead-contaminated soils with bacteria of the genus Bacillus [135]. The heavy-metal-resistant, tomato growth-promoting strain Bacillus sp. RJ16 (Table 1), which synthesized IAA, siderophores, and ACC deaminase to stimulate tomato root growth, increased the Cd and Pb contents of aboveground tissues by 92% to 113% and by 73% to 79%, respectively, in inoculated plants growing in heavy-metal-contaminated soil, compared with the non-inoculated control [135]. Similarly, B. subtilis and B. pumilus were able to facilitate the accumulation of various heavy metals, such as Cu, Cr, Pb, and Zn, in tissues of Zea mays and Sorghum bicolor (in a greenhouse illuminated with natural light; total concentrations of heavy metals in soil: Cu 22,800, Cr 16,865, Pb 1900, Zn 32,500 mg kg−1 dry soil) [136]. A different pattern was noted by Saran et al. [137], who showed that after 2 months, sunflower (Helianthus annuus) seedlings grown on contaminated soil (Cd 0.42, Cu 1.02, Pb 5.48, Zn 12 mg kg−1) and inoculated with the plant growth-promoting strain B. proteolyticus ST89 (Table 1) achieved 40% higher biomass production than uninoculated control plants and accumulated 20% less Pb and 40% less Cd in aboveground plant parts, which indicates a reduction in phytotoxicity (controlled greenhouse conditions) [137]. Interestingly, B. paramycoides ST9 (Table 1) increased the bioaccumulation factor of Pb three-fold and that of Cd six-fold, without suppressing plant growth [137]. However, most studies on this issue have addressed Cd bioremediation only. For instance, the application of the rhizobacterium B. subtilis reduced Cd bioavailability by approximately 39% in soil planted with ryegrass (Lolium multiflorum L.) and enhanced Cd accumulation in ryegrass by nearly 28%. Moreover, inoculation with this strain increased plant antioxidant enzyme activity and enhanced biomass by nearly 21% [96]. Additionally, using 16S rRNA sequencing, the researchers assessed the change in the native microbiota following the introduction of the PGPB. The study found that the application of B. subtilis caused significant changes in the rhizosphere microbiota, e.g., enrichment of the phylum Proteobacteria [96], which includes bacteria of the genus Pseudomonas; an increase in the abundance of this genus could further enhance bioremediation efficiency [138]. Furthermore, for Cd bioremediation, bacteria of the genus Bacillus can also be used together with other bacteria. For instance, the application of B. mycoides and Micrococcus roseus strains in Cd-contaminated soil (100 and 200 mg Cd kg−1) planted with maize (greenhouse experiment) contributed to increased Cd uptake in shoots and roots compared with the control [139]. Another example of the application of a consortium containing Bacillus sp. is the study by Pinter et al. [140]: the authors used a consortium consisting of B. licheniformis, Micrococcus luteus, and Pseudomonas fluorescens, which increased the concentration of As(III) in the leaves and enhanced the plant defense mechanisms, helping to reduce the toxic effects of As(III). There are also cases of bioremediation by Bacillus spp.
strains, in combination with plants, of metals less toxic than those mentioned above. For instance, He et al. [141] documented that the strains B. subtilis and B. cereus significantly enhanced Zn accumulation as well as shoot and root biomass compared with non-inoculated plants in experiments conducted on Orychophragmus violaceus (greenhouse under controlled climatic conditions). In addition, it has also been shown that members of the genus Bacillus can assist in the phytoremediation of nickel and promote plant growth in nickel-contaminated soils. Inoculation of B. juncea with the rhizobacterium B. cereus SRA10 (Table 1) contributed to notably enhanced growth and Ni accumulation in roots and shoots (greenhouse conditions) [142]. A slightly different pattern was noted by Rajkumar et al. [143] in a study conducted on B. juncea, which showed that Bacillus sp. Ba32 (Table 1), capable of producing siderophores and solubilizing phosphate, could stimulate the growth of this plant under Cr contamination but did not affect the amount of Cr accumulated in the roots and shoots (growth chamber conditions) [143]. As noted above, the majority of bioremediation experiments involving Bacillus spp. have been carried out under simplified or controlled conditions, including growth chamber and greenhouse studies. However, studies on the effectiveness of Bacillus spp. in bioremediation have also been conducted under outdoor conditions. Sheng et al. [144] carried out an outdoor pot experiment which demonstrated that soil inoculation with the biosurfactant-producing Bacillus sp. J119 (Table 1) significantly increased tomato plant biomass and Cd uptake into plant tissues, thereby increasing the phytoextraction potential in soil contaminated with this metal [144]. Moreover, Zaidi et al. [132] showed that the strain B. subtilis SJ-101 (Table 1) exhibited a protective activity against Ni phytotoxicity in Brassica juncea grown in soil treated with NiCl2 at concentrations ranging from 250 to 1750 mg kg−1 (pot experiments under open-field conditions). In addition, the study showed that this strain was also able to produce indole-3-acetic acid (IAA) and dissolve inorganic phosphate, which promotes the growth of the studied plant; the study thus indicated the possibility of using B. subtilis SJ-101 for bacteria-assisted phytoaccumulation of this toxic heavy metal at polluted sites [132]. Importantly, by conducting research on PGPB application for the bioremediation of heavy metals under controlled conditions, researchers reduce the number of factors affecting their effectiveness. As is well known, the effectiveness of PGPB is influenced by soil properties (including chemical and microbiological properties), which are modulated by a range of factors including meteorological conditions [116]. Therefore, there is still a great need for research conducted under field conditions, which would provide a broader view of the interactions between bacteria, plants, and soil and thus constitute an essential step in the transition from laboratory experiments to practical applications. Unfortunately, field trials using PGPB in phytoremediation are rarely reported [100,122]. An example of such a study using Bacillus spp. is the experiment conducted by Wu et al. [122]. The authors revealed that B.
megaterium BM18-2 (a mutant strain) (Table 1) was able to increase Cd accumulation in the aboveground parts of the plants (hybrid Pennisetum) by nearly 29% (572 µg plant−1) compared with the control (the cadmium concentration in the contaminated soil was 0.50 mg kg−1). Conclusions The biological removal of toxic metals using bacteria of the genus Bacillus has now gained much interest in the context of bioremediation studies. Methods based on microorganisms, including Bacillus spp., have several advantages over conventional physical and chemical techniques, including higher specificity, the possibility of in situ application, and the ability to enhance phytoremediation. In addition, the good adaptation of bacteria of this genus to unfavorable conditions, and the fact that they produce spores, provides a notable advantage over most other bioremediation approaches. Importantly, the efficiency of bioremediation using Bacillus spp. is constantly increasing. This improvement is driven by the use of statistical models to optimize bioremediation conditions, the development of molecular techniques that will make it possible to use bacteria with enhanced resistance to heavy metals in the future, and the application of PGPB as phytoremediation-supporting agents, thereby increasing their potential in the bioremediation process. However, such solutions, including the use of PGPB, are still rarely translated into soil bioremediation practice. Moreover, soil studies describing the possibility of using Bacillus spp. in phytoremediation are generally conducted under controlled conditions. Changing the research approach by using PGPB of the genus Bacillus for bioremediation more frequently under field conditions could contribute to the development of this research area and, consequently, improve its application prospects. Finally, it should also be mentioned that the development of next-generation sequencing (NGS), allowing more detailed insight into the crucial biodegradation pathways of these bacteria, provides an additional opportunity to enhance bioremediation by Bacillus spp. NGS techniques can also contribute to increasing knowledge of the relationships between bacteria exhibiting bioremediation traits and the native microbial communities of contaminated environments; such knowledge may reveal associations that result in synergistic cooperation. Data Availability Statement: The raw data supporting the conclusions of this article will be made available by the authors without undue reservation. Conflicts of Interest: The authors declare no conflict of interest.
7,694.4
2023-03-01T00:00:00.000
[ "Environmental Science", "Biology" ]
Activation of protein kinase A signaling and inhibitions of glycine/glutathione biosynthesis involves in the transition from prediabetes to diabetes based on metabolomics data analysis Metabolomics is expected to identify potential metabolites and related pathways, and thereby reveal the underlying mechanisms of the transition from prediabetes to diabetes. In this study, a metabolomics-based gas chromatography-mass spectrometry (GC-MS) technique was used to demonstrate the serum metabolic profiles of healthy, prediabetic, and diabetic subjects at the fasting state and at the 2h oral glucose tolerance test (2h OGTT) state. With the Ingenuity Pathway Analysis (IPA) tool, the comparative analysis showed no significant differences in the pathway analysis (P > 0.05) between prediabetes and diabetes at either the fasting state or the 2h OGTT state. The self-comparative analysis demonstrated that glycine/glutathione biosynthesis in diabetes was more inhibited than in prediabetes or healthy controls at the 2h OGTT state compared with the fasting state (P < 0.05). In addition, the protein kinase A signaling pathway in prediabetes and diabetes was inhibited significantly more than in healthy controls (P < 0.05). Therefore, glycine/glutathione biosynthesis and protein kinase A signaling could differentiate the diabetic subjects from the prediabetic and healthy control subjects, and may be involved in the transition from prediabetes to diabetes. This study provides further metabolomics information on the transition from prediabetes to diabetes. glycine/glutathione biosynthesis activated. The characterization of prediabetes at the 2h OGTT state was that protein kinase A signaling was not activated and glycine/glutathione biosynthesis was not activated. The characterization of the healthy controls at the 2h OGTT state was that protein kinase A signaling was activated and glycine/glutathione biosynthesis was not activated. Introduction As one of the fastest growing epidemics worldwide, diabetes is a metabolic disorder characterized by elevated blood glucose concentration and increased insulin resistance, leading to serious microvascular and macrovascular complications. Widespread changes in lifestyle and the aging of the global population have resulted in an unprecedented rise in the prevalence of diabetes in the world, especially in low- and middle-income countries 1,2 . China has the largest number of subjects with diabetes, with high morbidity and mortality 3,4 . In 2013, the overall prevalence of diabetes in China was 10.9%, while the prevalence of prediabetes was 35.7% 4,5 . Prediabetes, in which blood glucose levels are higher than normal but lower than the threshold applied for the diagnosis of diabetes, is considered a significant risk factor for diabetes and cardiovascular diseases 6 . Prediabetes is a reversible state, and early intervention in prediabetic subjects can reduce the risk of diabetes by 40% to 58% [7][8][9] . Characterization and identification of subjects in the prediabetic state is important for the prevention, management, and treatment of diabetes 10 . In recent years, an increasing number of studies have begun to focus on the transition from prediabetes to diabetes, but the pathogenesis is still unclear [11][12][13] . Accordingly, it is imperative to determine the potential mechanisms involved in the progression from prediabetes to diabetes. Diabetes is a systemic metabolic disorder, and metabolomics is therefore an appropriate approach to explore the pathogenesis of prediabetes and diabetes from the perspective of systemic metabolism.
Increasing numbers of studies have explored the relationship between a wide range of metabolites and diabetes using metabolomics techniques [14][15][16] . Metabolomics is the quantitative measurement of the dynamic multiparametric metabolic response of living systems to pathophysiological stimuli or genetic modifications 17 . By measuring and mathematically modeling changes in the products of metabolism (low-molecular-weight biochemicals including amino acids, sugars, nucleotides, organic acids, and lipids) found in biological fluids and tissues, high-throughput metabolomics technologies have provided fresh insights into pathophysiological pathways and the understanding of disease 18 . As a monitoring tool for metabolism and signaling pathways, metabolomics is expected to identify potential metabolites and to reveal the metabolic changes and underlying mechanisms of the progression from prediabetes to diabetes. In this study, a metabolomics-based gas chromatography-mass spectrometry (GC-MS) technique was used to demonstrate the serum metabolic profiles of healthy, prediabetic, and diabetic subjects. On the basis of metabolomic data combined with bioinformatics analysis, we aimed to unveil the metabolic and signaling pathways involved in the pathogenesis of the transition from prediabetes to diabetes. Study subjects This study is a community-based cross-sectional investigation conducted at Hangxin Hospital, Beijing, China, from July to November 2010. A total of 4000 community subjects underwent the health screening program. According to the diagnosis and classification criteria proposed by the American Diabetes Association (ADA) in 2010 19 , diabetes was defined as (1) a fasting plasma glucose (FPG) level of 7.0 mmol/L or higher, or (2) a 2h oral glucose tolerance test (2h OGTT) glucose level of 11.1 mmol/L or higher, or (3) a glycosylated hemoglobin A1c (HbA1c) concentration of 6.5% or more. If a subject met one or more of these criteria, he or she was categorized as having diabetes. Prediabetes was defined as (1) FPG ≥ 5.6 mmol/L and < 7.0 mmol/L, or (2) a 2h OGTT glucose level ≥ 7.8 mmol/L and < 11.1 mmol/L, or (3) an HbA1c concentration ≥ 5.7% and < 6.5%, in subjects without a prior diagnosis of diabetes. If a subject met one or more of these criteria, he or she was categorized as having prediabetes. The inclusion criteria were age > 18 and < 75 years, local residence in Beijing, complete data measurements, and informed consent. Subjects with cardiovascular and cerebrovascular diseases, mental disorders, gastrointestinal disease, nephropathy, metabolic syndrome, malignant tumors, pregnancy, or incomplete recorded information were excluded from this project based on their medical records. After investigation, a total of 105 subjects (69 males and 36 females), comprising 35 subjects in the diabetes group, 35 subjects in the prediabetes group, and 35 subjects in the healthy control group, with complete data were finally enrolled in this study. The study was performed according to the guidelines of the Declaration of Helsinki. A standard protocol was designed by the Institute of Basic Research in Clinical Medicine and was approved by the Ethics Committee of the China Academy of Chinese Medical Sciences. Written informed consent was obtained from all subjects.
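The group assignment described above follows fixed thresholds. A minimal sketch of this classification logic is given below; the function and variable names are hypothetical, and the cut-offs are taken directly from the ADA 2010 criteria quoted above (any diabetes criterion is checked before the prediabetes ranges):

def classify_glycemic_status(fpg, ogtt_2h, hba1c):
    """Classify a subject using the ADA (2010) criteria applied in this study.

    fpg     : fasting plasma glucose, mmol/L
    ogtt_2h : 2h oral glucose tolerance test glucose, mmol/L
    hba1c   : glycosylated hemoglobin A1c, %
    """
    if fpg >= 7.0 or ogtt_2h >= 11.1 or hba1c >= 6.5:
        return "diabetes"
    if (5.6 <= fpg < 7.0) or (7.8 <= ogtt_2h < 11.1) or (5.7 <= hba1c < 6.5):
        return "prediabetes"
    return "healthy"

# Example: FPG 6.1 mmol/L, 2h OGTT 8.4 mmol/L, HbA1c 5.9% -> "prediabetes"
print(classify_glycemic_status(6.1, 8.4, 5.9))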
Data measurements and blood sampling All subjects were asked to fill out a questionnaire focusing on demographic characteristics (age, gender, education, etc.), anthropometrics (height, weight and waist circumference (WC)), medical history, and health-related behavior under the guidance of physicians. Fasting blood samples and 2h OGTT blood samples were drawn via venipuncture from the study subjects by clinical nurses. After storage for 2 h at 4°C, the blood samples were centrifuged at 3000 rpm for 10 min. The obtained serum was divided into two parts: one part was used for the measurement of FPG, HbA1c, 2h OGTT, total cholesterol (TC), triglyceride (TG), high- and low-density lipoprotein cholesterol (HDL-C, LDL-C), and uric acid (UA) concentrations according to the manufacturers' instructions for the respective commercial test kits. The remaining 100 µL of serum was added to 320 µL of methanol, and the mixture was vortexed for 60 s. After centrifugation at 15000 rpm for 10 min at 4°C, the supernatant was stored at -80°C for GC-MS analysis. A blood sample (6 mL) was randomly selected and divided into six parts that were extracted identically; these six samples were injected consecutively to verify the repeatability of the sample preparation method. A 20 µL aliquot was taken from each blood sample to produce a pooled quality control (QC) sample, and a 100 µL aliquot of this pooled sample was extracted by the same method. The pooled sample provides a representative "mean" sample containing all the analytes encountered during the analysis and was used to verify the stability of the GC-MS system. GC-MS analysis GC-MS analysis was performed using a GCMS-QP2010 Plus (Shimadzu, Kyoto) with a capillary column (Rxi-50, 30 m × 0.25 mm, 0.25 µm film thickness). Helium was used as the carrier gas at a flow rate of 1.0 mL/min. The oven temperature was programmed from 60 to 80℃ at 5℃/min, then from 80 to 90℃ at 2℃/min (held for 3 min), from 90 to 150℃ at 10℃/min (held for 1 min), from 150 to 220℃ at 1℃/min, and from 220 to 290℃ at 10℃/min. The injector and interface temperatures were maintained at 250℃. Mass spectra were generated in electron impact mode at 70 eV. The ion source temperature was maintained at 250℃. A 1 μL sample was injected in split mode (split ratio 60:1). Based on the linear retention index (RI) and the comparison of MS data with reference compounds, the components were preliminarily identified. The linear retention indices of all components were determined using homologous n-alkanes (C10-C40). These components were identified by comparison with the NIST05 and NIST05S mass spectral libraries 20 . Data processing and statistical analysis The number of components in different samples was selected according to the retention times of the common peaks. The retention times and peak areas from GC-MS were compiled into a single table, which was then used as the input for multivariate statistical analysis. Multivariate statistical analyses, including unsupervised principal component analysis (PCA) and partial least-squares discriminant analysis (PLS-DA), were applied to the metabolic profiles using the SIMCA-P 11.0 statistical package (Umetrics AB, Umeå, Sweden). The SAS 9.1.3 statistical package (order No. 195557) was used for statistical analysis. The chi-square test was used for categorical data. The measured data were normally distributed, and analysis of variance was used for comparisons between groups. P values < 0.05 were considered significant for all statistical tests.
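To make the data-processing step concrete, here is a minimal sketch, in Python rather than the SIMCA-P/SAS workflow actually used, of the two operations described above: a repeatability check on QC injections and an unsupervised PCA overview of the peak table. The table layout, group labels, and simulated values are ours and purely illustrative.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical peak table: rows = injections, columns = aligned GC-MS peak areas.
rng = np.random.default_rng(0)
peak_table = pd.DataFrame(
    rng.lognormal(mean=10, sigma=0.1, size=(20, 50)),
    columns=[f"peak_{i}" for i in range(50)],
)
group = ["QC"] * 5 + ["healthy"] * 5 + ["prediabetes"] * 5 + ["diabetes"] * 5

# Repeatability: relative standard deviation (RSD, %) of each peak across QC injections.
qc = peak_table[np.array(group) == "QC"]
rsd_percent = 100 * qc.std(ddof=1) / qc.mean()
print(f"QC peak-area RSD range: {rsd_percent.min():.1f}% - {rsd_percent.max():.1f}%")

# Unsupervised overview: PCA on autoscaled peak areas (QC samples should cluster tightly).
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(peak_table))
for label, (pc1, pc2) in zip(group, scores):
    print(f"{label:12s} PC1={pc1:6.2f} PC2={pc2:6.2f}")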
Pathway analysis The analysis of bio-functions and canonical pathways for the candidate metabolites was conducted using the Ingenuity Pathway Analysis system (IPA, Ingenuity® Systems, http://www.ingenuity.com) to gain insight into the typical metabolic alterations associated with the biomarkers and the mechanisms related to the transition from prediabetes to diabetes. Baseline characteristics of the study subjects The clinical and biochemical characteristics of the subjects are shown in Table 1. The subjects with prediabetes or diabetes tended to have significantly higher age, weight, body mass index (BMI), WC and FPG compared with the healthy control subjects (P < 0.05). The subjects with diabetes had significantly higher HbA1c, FPG and 2h OGTT than the subjects with prediabetes (P < 0.05), and there was no significant difference in HbA1c, FPG and 2h OGTT between the subjects with prediabetes and the healthy control subjects. Moreover, gender, TC, TG, HDL-C, LDL-C and UA were not significantly different among the three groups. Testing with the GC-MS method Six samples prepared from a randomly selected blood sample were injected consecutively to assess repeatability. According to differences in chemical polarity and m/z value, five commonly used extracted ion chromatograms (EICs) were screened. The relative standard deviations (RSDs) of peak area were 4.13% ~ 13.13%, and the relative standard deviations of retention time were 0.04% ~ 0.98%. The stability of the method in large-scale sample analysis was verified by the analysis of pooled QC samples. PCA results showed that the QC samples were closely clustered. In addition, the peak area, retention time and mass accuracy of the five EICs selected from the five QC samples also showed good system stability. The RSDs of the five peaks were 4.94% ~ 14.88% (peak area), 0.03% ~ 1.10% (retention time), and 0.14 × 10⁻⁴% ~ 0.76 × 10⁻⁴% (mass accuracy), respectively. The results showed that large-scale sample analysis had no significant effect on the reliability of the data 20 . Identification of the differential metabolites Typical base peak chromatograms (BPCs) of serum samples were obtained from the diabetes, prediabetes and healthy control groups. Pattern recognition with PLS-DA was adopted on the basis of the metabolic changes in these subjects as revealed by the BPCs. These methods facilitated the classification of the metabolic phenotypes and helped to identify the differential metabolites. As shown in Table 2, 8 differential metabolites were identified at the fasting state and 14 differential metabolites were identified at the 2h OGTT state in the subjects with prediabetes. As shown in Table 3, 14 differential metabolites were identified at the fasting state and 16 differential metabolites were identified at the 2h OGTT state in the subjects with diabetes. Pathway analysis With IPA, as shown in Figure 1, the common pathways in both prediabetes and diabetes at the fasting state were bupropion degradation, glycine biosynthesis I and glycine biosynthesis III (P < 0.05). As shown in Figure 2, the pathways in both prediabetes and diabetes at the 2h OGTT state were glycine biosynthesis III, L-dopachrome biosynthesis, growth hormone signaling, maturity-onset diabetes of the young signaling, and bupropion degradation (P < 0.05). As illustrated in Figures 1 and 2, two pathways were shared by the differential metabolites in both prediabetes and diabetes at the fasting state and the 2h OGTT state: glycine biosynthesis III and bupropion degradation.
However, the comparative analysis between prediabetes and diabetes at either the fasting state or the 2h OGTT state revealed no significant differences in metabolic pathways (P > 0.05). The pathways in the three groups at the 2h OGTT state compared with the fasting state are shown in Figure 3. The protein kinase A signaling pathway showed statistical significance in the healthy group (2h OGTT state vs. fasting state) (P < 0.05) but not in prediabetes or diabetes (2h OGTT state vs. fasting state) (P > 0.05); the glycine biosynthesis I and glutathione biosynthesis pathways showed much greater statistical significance in diabetes (2h OGTT state vs. fasting state) than in prediabetes and the healthy group (2h OGTT state vs. fasting state) (P < 0.05). The metabolite changes in the three pathways with statistical significance in the healthy, prediabetes and diabetes groups at the 2h OGTT state compared with the fasting state are shown in Figure 4. In the upper part of the glycine biosynthesis I pathway, L-serine showed fold changes that were not significant (from -1.242 to -1.267) in the healthy, prediabetes and diabetes groups (2h OGTT state vs. fasting state). In glutathione biosynthesis, interestingly, glycine appeared only in the diabetes group, and the results suggested that the pathway was inhibited. In protein kinase A signaling, the results showed that the pathway was inhibited in the healthy group, whereas it was activated in prediabetes and diabetes. Discussion Diabetes is the result of prediabetes progression. Despite the considerable number of studies collected and analyzed regarding diabetes, the molecular mechanisms of the transition from prediabetes to diabetes are still unknown 21 . Deciphering the biomarkers and mechanisms of the transition from prediabetes to diabetes is vital to preventing disease progression. In the present study, the serum metabolite changes of prediabetes and diabetes were identified from serum metabolic profiles using the GC-MS technique. Pathway analysis showed that no significant differences were found between prediabetes and diabetes at either the fasting state or the 2h OGTT state. Meanwhile, glycine biosynthesis III and bupropion degradation were the two common metabolic pathways in both prediabetes and diabetes at the fasting state and the 2h OGTT state. Bupropion with naltrexone is a combination therapy for obesity and for obesity with diabetes, and the combination can significantly improve lipid metabolism, glucose metabolism and insulin resistance [22][23][24][25] . Glycine is the proteinogenic amino acid of lowest molecular weight, harboring a hydrogen atom as a side-chain 26 . In the present study, glycine was found to be down-regulated in serum in both prediabetes and diabetes. It has been confirmed that the glycine pathway is associated with diabetes, and the decline in glycine levels is involved in the pathogenesis of glucose intolerance, insulin resistance and diabetes [27][28][29] . In this study, two pathways, glycine biosynthesis I and glutathione biosynthesis, were crucial for the transition from prediabetes to diabetes. We found that glycine/glutathione biosynthesis in diabetes was more activated than in prediabetes or healthy controls at the 2h OGTT state compared with the fasting state. Likewise, researchers found that the top-ranking metabolites associated with insulin resistance were in the glycine biosynthesis and glutathione biosynthesis pathways 30-32 . Reduced glycine levels have been considered the most robust and consistent amino acid marker for prediabetes and incident diabetes 33 .
Glutathione, often referred to as the master antioxidant, participates not only in antioxidant defense systems but also in many metabolic processes 34 . There is increasing evidence that dysregulation of glutathione synthesis contributes to the pathogenesis of insulin resistance and incident diabetes 35,36 . Nevertheless, there has been no study on the relationship between these pathways and the transition from prediabetes to diabetes. Protein kinase A is a multi-unit protein kinase that mediates signal transduction of G-protein-coupled receptors through its activation by adenylyl cyclase-mediated cAMP 37 . The cAMP-dependent protein kinase A pathway, known to promote cell growth and delay apoptosis 38 , could regulate glucose homeostasis at multiple levels, including insulin and glucagon secretion, glucose uptake, glycogen synthesis and breakdown, gluconeogenesis and neural control of glucose homeostasis 39 . Therefore, glycine/glutathione biosynthesis and protein kinase A signaling may all be involved in the transition from prediabetes to diabetes. As illustrated in Figures 3 and 4, compared with the fasting state, diabetes at the 2h OGTT state was characterized by protein kinase A signaling not activated and glycine/glutathione biosynthesis activated; prediabetes at the 2h OGTT state was characterized by protein kinase A signaling not activated and glycine/glutathione biosynthesis not activated; and the healthy controls at the 2h OGTT state were characterized by protein kinase A signaling activated and glycine/glutathione biosynthesis not activated. This study provided a good template for determining the differential metabolites of subclinical disease status and a better understanding of prediabetes progression based on metabolomics. There are, however, several limitations of this study. The subjects in this study were all office workers in Beijing, and the prevalence of prediabetes and diabetes may be higher than that in a rural area. Further studies with a larger sample size and more detailed information collection are needed.

Figure 1. The pathways in both prediabetes and diabetes at the fasting state. The pathways included bupropion degradation, glycine biosynthesis I and glycine biosynthesis III (P < 0.05).

Figure 3. The pathways in the three groups at the 2h OGTT state compared with the fasting state. The protein kinase A signaling pathway showed statistical significance in the healthy group (2h OGTT state vs. fasting state) (P < 0.05) but not in prediabetes or diabetes (2h OGTT state vs. fasting state) (P > 0.05); the glycine biosynthesis I and glutathione biosynthesis pathways showed statistical significance in diabetes (2h OGTT state vs. fasting state) compared with prediabetes and the healthy group (2h OGTT state vs. fasting state) (P < 0.05).

Figure 4. The metabolite changes in the three pathways with statistical significance in the healthy, prediabetes and diabetes groups at the 2h OGTT state compared with the fasting state. In the glycine biosynthesis I pathway (upper), L-serine showed fold changes that were not significant (from -1.242 to -1.267) in the healthy, prediabetes and diabetes groups (2h OGTT state vs. fasting state). In glutathione biosynthesis (middle), interestingly, glycine appeared only in diabetes, and the results suggested that the pathway was inhibited in diabetes. In protein kinase A signaling (lower), the results showed that the pathway was inhibited in the healthy group, whereas it was activated in prediabetes and diabetes.
4,355.4
2020-12-02T00:00:00.000
[ "Medicine", "Chemistry" ]
Materials for electronically controllable microactuators Abstract Electronically controllable actuators have shrunk to remarkably small dimensions, thanks to recent advances in materials science. Currently, multiple classes of actuators can operate at the micron scale, be patterned using lithographic techniques, and be driven by complementary metal oxide semiconductor (CMOS)-compatible voltages, enabling new technologies, including digitally controlled micro-cilia, cell-sized origami structures, and autonomous microrobots controlled by onboard semiconductor electronics. This field is poised to grow, as many of these actuator technologies are the first of their kind and much of the underlying design space remains unexplored. To help map the current state of the art and set goals for the future, here we overview existing work and examine how key figures of merit for actuation at the microscale, including force output, response time, power consumption, efficiency, and durability, are fundamentally intertwined. In doing so, we find performance limits and tradeoffs for different classes of microactuators based on the coupling mechanism between electrical energy, chemical energy, and mechanical work. These limits both point to future goals for actuator development and signal promising applications for these actuators in sophisticated electronically integrated microrobotic systems. Introduction In the past 10 years, microactuators have seen significant reductions in accessible length scales (sub 1 µm), control voltages (~1 V), and power consumption (1-10 nW), thanks to a host of new materials. As highlighted in Figure 1, these advances have enabled mechanical systems to easily integrate with the tiny packages of sensors, power, and computation, pointing to a near-term future in which autonomous, programmable machines can help shape and control the microworld. Arguably, the dominant actuation approach at the microscale is to use bending/folding mechanisms for moving parts. 1 Bending avoids issues with stiction, is well suited to the two-dimensional (2D) patterning used in lithographic fabrication, and, because pure elastic bending is essentially scale invariant, allows designs to be scaled up or down in size by proportionally altering all the dimensions. With an eye toward microsystems, we focus here on bending actuators where the operating voltage is under 10 V and the curvature is larger than 1 mm−1 (Figure 2), as actuators outside of these bounds require nontrivial voltage conversion or operate at too small a curvature to be useful in a submillimeter machine. Demonstrated electronically controlled bending microactuators that fall within these constraints operate via three mechanisms: thermal, 2,3 electrochemical, [4][5][6][7][8] and piezoelectric. 9 An emerging question for this field is how to quantify actuator performance at the microscale. Given the distinct physics of tiny machines, it stands to reason that microactuators should be gauged by figures of merit that are different from those used to describe their macroscale cousins. For instance, power-to-weight ratio is not useful in a world where gravitational forces are negligible compared to drag and surface forces. Further, for many microactuators, energy consumption scales with area, making areal figures of merit more informative than volumetric ones.
This article seeks to establish a new framework for comparing microscale actuators, giving rules that are useful in selecting a given actuator for a given task and for quantifying progress as new materials for microactuators come into focus. We argue that key figures of merit at the microscale include durability, the force normalized by width-to-length ratio (force per square), efficiency of work, and response time, as these considerations are well suited both to the physics of the microworld and the design considerations of microfabrication. We also note that these metrics are not fully orthogonal: for instance, in many actuators, we find efficiency and strain are fundamentally coupled, as are force output and response time. These results show where improvement could be gained, and where current actuators are near fundamental bounds. Thermal actuators Thermal microactuators achieve bending by heating layered stacks of two or more materials with different thermal-expansion properties. They can be controlled electrically via Joule heating or externally by laser illumination. Thermal microactuators have many attractive features, including fast actuation, large force outputs, relatively low driving voltages (1-10 V), and repeatable bending over many cycles. At the submillimeter scale, prior works have demonstrated electrically controlled thermal actuation with strains up to ~2% by leveraging thermal phase transitions 2 and shape-memory behavior by reflowing polymers during actuation. 3 One major challenge for using thermal actuators, especially when used in autonomous microsystems such as robots, is their large power consumption. Previous examples of microactuators at the 100 µm to 1 mm scale driven by Joule heating require ~1 mA currents and powers around 1 mW. These large electrical requirements stem from heat lost to the environment: dimensional analysis indicates that the power lost to the surrounding environment is ∼ κLΔT, where κ is the thermal conductivity of the surrounding medium, ΔT is the difference in temperature between the actuator and its environment, and L is the length of the actuator. For submillimeter actuators in air (κ ≈ 10−2 W/m·K), the minimum power required to heat the actuator by 10 K is about 100 µW, a challenging constraint for an untethered microrobot.
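As a rough, back-of-the-envelope check on these order-of-magnitude figures (and on the battery and photovoltaic estimates quoted in the next paragraph), the short Python sketch below reproduces the arithmetic; the numerical inputs are the nominal values from the text, and the per-cycle drive duration is an assumption of ours.

# Order-of-magnitude power budget for a Joule-heated thermal microactuator.
kappa_air = 1e-2          # W/(m*K), thermal conductivity of air (order of magnitude)
L = 1e-3                  # m, actuator length scale (submillimeter)
dT = 10.0                 # K, temperature rise above ambient
P_loss = kappa_air * L * dT                  # P ~ kappa * L * dT
print(f"Steady heating power: ~{P_loss * 1e6:.0f} uW")       # ~100 uW

# Onboard energy supply for comparison (values quoted in the surrounding text).
battery_density_Wh_per_L = 1000.0
volume_L = (100e-6) ** 3 * 1e3               # 100-um cube, m^3 converted to litres
E_battery_J = battery_density_Wh_per_L * volume_L * 3600.0   # ~3.6 mJ
# Assumption (ours): each actuation draws the ~1 mW demonstrated drive power for ~0.1 s.
E_per_cycle_J = 1e-3 * 0.1
print(f"Battery energy ~{E_battery_J * 1e3:.1f} mJ -> ~{E_battery_J / E_per_cycle_J:.0f} cycles")

# A 100-um-square silicon photovoltaic in full sunlight (~1 mW/mm^2, ~10% efficient).
P_pv = 1e-3 * (0.1 ** 2) * 0.1               # W: intensity (W/mm^2) * area (mm^2) * efficiency
print(f"PV power ~{P_pv * 1e6:.1f} uW, versus ~{P_loss * 1e6:.0f} uW needed")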
While there is significant ongoing progress on batteries for small-scale robots (as highlighted by the article by Schmidt and Zhu in this issue 10 ), the best existing batteries at any scale have a volumetric energy density of about 1000 Wh/L, 11 giving a 100 µm cubic battery enough energy to power fewer than 100 actuation cycles. Even photovoltaic power, which scales as length squared, would be insufficient to power a thermal actuator: a 100-µm-square silicon photovoltaic with a 10% power-conversion efficiency produces about 1 µW of power in full sunlight, two orders of magnitude too low to drive actuation. A clever workaround to thermal actuation's high power demand was demonstrated by Han et al. and is shown in Figure 3a: rather than use onboard power, a targeted laser directly heats the actuator, enabling untethered submillimeter robots that walk on land. 12 The authors maintained the optical power link over long ranges by integrating retroreflective materials on the robot's body. Indeed, thermal actuation is well suited to terrestrial microrobots because walking on a surface demands forces large enough to overcome adhesion, while legged locomotion requires high durability, two key features of this actuator class. Electrochemical actuators Several groups have demonstrated microactuators that bend in response to electrochemical charging. These actuators operate in conductive solutions, and bending is controlled by transferring charge between the solution and the actuator. Demonstrated examples include conductive (conjugated) polymers, 5,6,13 metals, 7,8 and battery materials. 14 Conjugated polymer electrochemical actuators leverage redox reactions of the polymer with ions in the solution. These reactions generate a strain in the conductive polymer layer which, when stacked with a passive layer, generates bending. 15 The first work on these polymer actuators used electrodeposited polypyrrole (Ppy) on gold to demonstrate micron-thick bilayer actuators with ~0.1 µm−1 curvature changes and forces of about a micronewton per square. 3 Since then, Ppy has been used to make robotic grippers with multiple joints, 16 microactuators with onboard strain sensing, 17 and ciliated surfaces for pumping fluid. 18 More recently, PEDOT:PSS microactuators have been demonstrated. 5
Although PEDOT:PSS exhibits smaller strains than Ppy actuators, it can be spin-coated onto a variety of surfaces, simplifying fabrication. Conjugated polymer microactuators are frequently used in microrobotic systems for their combination of high forces and curvatures. Their primary drawbacks are related to speed and durability. The maximum frequency of actuation is about 1 Hz, 4,17 limited by mass transport. 4 Ppy actuators also fail after 1000-10,000 cycles due to delamination between the active layer and the metal film onto which it is deposited. 4,15 Recent works have demonstrated actuation through at least 1000 cycles without delamination by adding layers to move the neutral axis of bending closer to the metal/Ppy interface, decreasing the strain this interface experiences. 17 While previously demonstrated conjugated polymer microactuators require an electrolyte, they operate in biologically relevant saline solutions, making them promising actuators for biomedical applications. [18][21][22][23] Battery materials are known to expand dramatically when charging, giving an interesting route to microactuators with high strains. In particular, Xia et al. demonstrated microscale beam bending via lithiation of a microstructured silicon anode. 14 By leveraging the >300% expansion of silicon during lithiation, they demonstrate a variety of electrically controlled bending and buckling structures and programmable metamaterials. These structures produce large bending forces and, despite being hundreds of nanometers thick, bend to 10-100 µm radii of curvature (Rc). Although large bending forces and curvatures recommend battery materials as microactuators, lithiated silicon also has several downsides. First, the actuation response time is on the order of minutes, limited by the diffusion of lithium into the silicon. In principle, this could be improved by using thinner active layers, trading force output for response time. Second, the expansion is so dramatic that repeated charging and discharging fracture the silicon and degrade the actuation response over just a few cycles. Indeed, expansion and resultant mechanical fracture are well-known issues with silicon battery anodes. 24 Actuators where silicon is the only active material also require a solution of lithium salt to operate in, though future battery-material-based actuators could include anode/solid electrolyte/cathode trilayer stacks, creating fully self-contained microactuators. Surface electrochemical actuators (SEAs) leverage the surface chemistry of metals to drive bending at the micron scale. 7,8 SEAs consist of ultrathin platinum capped on one side by a passive layer, with a total stack thickness of about 10 nm. In aqueous environments, surface electrochemistry at the platinum surface, either adsorption of ions on the platinum 7 or oxidation of the platinum surface, 8 generates surface stresses that cause bending. SEAs exhibit several unique features: they bend to curvatures of about ~1 µm−1, operate with approximately nW input power, exert forces of about 1-10 nN, exhibit frequencies between 10 and 100 Hz (limited by fluid drag), and actuate repeatedly over thousands of cycles. The same structures also function as chemically responsive microactuators in air. 25
These actuators demonstrate higher curvatures and operation frequencies than other electrochemical microactuators at appreciably lower voltages and powers; the tradeoff for these benefits is a comparatively low force output. Because SEAs rely on surface stresses for actuation, gains in force from making the actuator thicker are limited: force increases linearly with thickness, t, while curvature decreases as t−2. Similar to other electrochemical microactuators, SEAs' operation is currently limited to aqueous electrolyte environments. Ongoing work related to SEAs is exploring microactuators that use other metals with electrochemical activity or integrating electrolytes into a single packaged actuator. Nonetheless, the low power requirements, relative ease of fabrication and integration with microelectronics, and durable actuation over many cycles make them a model actuator for microrobotic systems. The fact that SEAs actuate at hundreds of millivolts allows them to trivially integrate with semiconductor electronics, because the same voltage scales are required to drive a transistor into saturation. In turn, SEAs have been used to build electronically integrated versions of microscale origami structures, 8 micro-cilia arrays, 26 and microscopic robots powered and controlled with onboard circuits. 7,27 One of the most sophisticated examples of electronics integration was demonstrated by Reynolds et al., who constructed fully autonomous robots, shown in Figure 3b. 27 Each machine, powered by onboard solar cells, can walk across a substrate using onboard semiconductor electronics to control gait. Beyond walking, robots within this work could alter behavior on command: a user can send instructions as time-varying optical signals, which the robot decodes and implements by speeding up its gait cycle. Ongoing work seeks to extend these capabilities further, incorporating more sophisticated control electronics such as microprocessors, sensors, and memory. 28 Piezoelectric actuators At the millimeter scale and larger, piezoelectric actuators are ubiquitous because of their repeatable actuation, high-frequency response, and high efficiency. [30][31][32][33] These systems typically use lead zirconate titanate (PZT) and achieve bending radii approaching 1 mm with micron-thick layers operating at tens of volts. 33,34 However, making bending piezoelectric actuators at the micron scale is a challenge. Piezoelectric materials generate relatively low strains, from 0.01 to 1 percent. To achieve submillimeter radii of curvature with a multilayer stack that includes a piezoelectric, top, and bottom electrode layer, the active material must have a thickness of 10-100 nm. This imposes stringent materials constraints for growth because the film must be crystalline and deposited on ultrathin metal electrodes to achieve actuation. One example of piezoelectric actuators with submillimeter bending consists of an aluminum nitride piezoelectric layer and platinum electrodes, all less than 30 nm in thickness. 9 Because they are so thin, these microactuators operate at <1 V, bend to about 300 µm radii of curvature, and exhibit fast (up to megahertz) actuation. 35
Despite relatively smaller displacements compared to electrochemical and thermal actuators, piezoelectric actuators' high operating frequencies, high efficiencies, and low power consumption make them promising candidates for a variety of microrobotic applications. For instance, because they operate in air by design, they are a natural fit for terrestrial robots. Likewise, larger (>100 μm) microscopic robots could use high-frequency operation to compensate for low displacement, allowing the robot to take fast, small steps to achieve reasonable speeds. Strain, efficiency, and durability Energy storage is difficult in microsystems due to the small available volume, and instead actuators are often connected to continuous power sources to achieve long-term operation. Consequently, the efficiency, or proportion of power utilized for work, can often be less important than the nominal power draw. Indeed, Figure 4a shows a plot of power required for actuation against efficiency, η, which we define as the ratio of the mechanical work necessary to deform the actuator to a given deflection to the electrical energy expended during deformation. We note that although many actuators have comparable efficiencies, the actual power input can differ by several orders of magnitude. For instance, compare SEAs and thermal actuators. Both are nearly equal in efficiency but differ dramatically in nominal power: SEAs consume nW, whereas thermal actuators consume >100 µW. As a result, it is straightforward to integrate SEAs with an onboard power source in a microrobot, while, as noted earlier, it is difficult to do the same with a thermal actuator. Although efficiency may not be sufficient to determine which actuator is well suited to an application, it can still be used to describe fundamental limits of actuator performance. For instance, Figure 4b shows the efficiency for each microactuator plotted against the strain due to bending, ε ≈ t/Rc. Strains vary from 10−4 to 10%, and the data show a roughly linear relationship between strain and efficiency for electrochemical and thermal actuators, with piezoelectric actuators as a distinctly different class. Because the mechanical work for any bending actuator, regardless of mechanism, is given by Um ∼ Ey t w L ε², where Ey is the Young's modulus and w is the width, different relationships between efficiency and strain must arise because of different scaling laws linking electrical work (QV) and deformation.
One possibility is that the strain is proportional to the electrical work (QV ∝ ε), leading to an efficiency that is also proportional to strain. This is the most common case for microactuators, applying to thermal actuators, dielectric electroactive polymers, and SEAs. For instance, dielectric electroactive polymer actuators operating in the elastic limit experience a strain ε = QV/(Ey t w L), where t is the thickness of the polymer layer. SEAs also follow this trend: QV is proportional to the surface stress, which in turn is proportional to strain. Although thermal actuators do not rely on charge storage, a similar argument leads to the same scaling. For a thermal actuator, the minimum energy input to drive bending is linear in ΔT. Because strain and temperature are proportional in thermal actuation, the input energy scales with strain and, by extension, so does the efficiency. A second possibility, displayed by piezoelectric actuators, is that QV ∝ ε². A piezoelectric actuator behaves similarly to a capacitor in the electrical domain, while the built-in electrical polarization of the material causes strain to scale linearly with applied voltage. In this case, the efficiency is a strain-independent material parameter (i.e., the electromechanical coupling). This leads to actuators, including the green point in Figure 4b, with high efficiencies despite having relatively low strains. The previously discussed analysis points to a simple principle for improving the efficiency of many actuators: increase strain. However, the yield strains of the actuator's constituent materials set a limit on how much can be gained by this approach, as past this point actuators fail over repeated cycling. Figure 4 indicates the yield strain for two common microactuator materials, platinum and polypyrrole, showing that many actuators are already operating at, and in some cases past, their yield strain limit. Indeed, actuators start to fail in these high-strain cases: SEAs achieve higher curvatures and efficiencies for initial cycles when driven via oxidation instead of surface adsorption, but decrease in actuation amplitude by a factor of two over several thousand cycles; polypyrrole actuators with metal layers delaminate over about 1000 cycles; and, most drastically, lithiated silicon, while achieving more than 10% strain in a hard material, fails over about 10-100 cycles. In general, an actuator's best balance between efficiency, actuation amplitude, and repeatability over many cycles requires operating just below the yield strain of its lowest-yield-strain material. Achieving a higher efficiency without sacrificing durability would require new materials with larger elastic windows, stronger coupling between electrical and mechanical energy, and/or lower operating voltages.
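To make the scaling argument concrete, the short Python sketch below evaluates η = Um/(QV) for the two cases; the geometry, stiffness, and proportionality constants are arbitrary illustrative values of ours, not numbers from the article. The only point it demonstrates is that η grows linearly with strain when QV ∝ ε and stays constant when QV ∝ ε².

import numpy as np

# Illustrative (made-up) actuator geometry and stiffness.
E_y, t, w, L = 100e9, 100e-9, 10e-6, 100e-6   # Pa, m, m, m

def mechanical_work(strain):
    # U_m ~ E_y * t * w * L * strain^2 (bending actuator, any mechanism)
    return E_y * t * w * L * strain**2

strains = np.array([1e-4, 1e-3, 1e-2])

# Case 1: electrical work proportional to strain (thermal, dielectric EAPs, SEAs).
# The prefactor k1 is arbitrary; only the trend (efficiency proportional to strain) matters.
k1 = 1e-3
for s in strains:
    eta = mechanical_work(s) / (k1 * s)
    print(f"QV ~ strain  : strain={s:.0e}  efficiency={eta:.2e}")

# Case 2: electrical work proportional to strain^2 (piezoelectric, capacitor-like).
k2 = 1e-1
for s in strains:
    eta = mechanical_work(s) / (k2 * s**2)
    print(f"QV ~ strain^2: strain={s:.0e}  efficiency={eta:.2e}")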
Force and response time Response time and force are critical variables for any actuator, and Figure 5 shows they vary broadly for microactuators. The actuators surveyed here differ by almost ten orders of magnitude in response time and six orders of magnitude in force, enabling a wide range of possible applications. Yet there is also an evident tradeoff between the two: actuators that supply higher forces also tend to work at slower speeds. For both thermal and electrochemical actuators, this dependence arises because relaxation time and force both scale with the thickness of the actuator: thicker actuators can supply more force, but they also take longer to equilibrate. Specifically, for a bending actuator, the force per square scales quadratically with thickness, ∼ Ey t² ε, while thermal (electrochemical) actuators have response times that scale linearly (quadratically) with thickness. The thermal relaxation is set by the rate of passive cooling to the environment as τ ∼ CLt/κ, where C is the volumetric specific heat capacity, L is the lateral dimension of the actuator, and κ is the thermal conductivity of the environment, while chemical equilibration depends on the diffusion of chemical species into the actuator material, giving τ ∼ t²/D, where D is the diffusion coefficient. The fact that few high-speed, high-force actuators exist poses interesting design challenges for microrobotics. For instance, walking robots that operate in air require larger force scales to overcome adhesive forces. Yet given the available actuator choices, a robot of the same size could potentially walk faster through water, despite viscous drag forces, thanks to faster actuators. Future work may be needed to mitigate this tradeoff, potentially improving interlayer transport of electrochemical actuators or altering the heat capacity or conductivity of thermal actuators to increase the speed.
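The thickness scalings above can be tabulated directly; the Python sketch below does so with illustrative parameter values chosen by us (they are not the article's data and only show how force and equilibration time pull in opposite directions as thickness grows).

import numpy as np

# Illustrative (made-up) parameters; only the thickness scalings matter.
E_y   = 100e9    # Pa, Young's modulus of the stack
eps   = 1e-3     # bending strain
C     = 2e6      # J/(m^3 K), volumetric heat capacity
L     = 100e-6   # m, lateral actuator dimension
kappa = 0.6      # W/(m K), thermal conductivity of surrounding water
D     = 1e-13    # m^2/s, assumed solid-state diffusion coefficient

for t in (10e-9, 100e-9, 1e-6):             # actuator thickness, m
    force_per_square = E_y * t**2 * eps      # ~ E_y t^2 eps
    tau_thermal = C * L * t / kappa          # ~ C L t / kappa (linear in t)
    tau_chemical = t**2 / D                  # ~ t^2 / D (quadratic in t)
    print(f"t={t*1e9:7.0f} nm  force/square={force_per_square:.1e} N"
          f"  tau_thermal={tau_thermal:.1e} s  tau_chemical={tau_chemical:.1e} s")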
Outlook for microactuators The features shared by these material platforms point to a bright future for microactuators. First, the actuators reviewed here are all made from materials that can be processed massively in parallel with lithographic techniques and deposited directly on top of prefabricated electronics. Combined with the fact that they use actuation voltages low enough to integrate directly with microelectronics, these actuators point to a possible future where mechanical elements can be seamlessly integrated with semiconductor electronics, enabling us to build tiny robots as easily as we build circuits. A core challenge in realizing this vision, at least in the near term, will be establishing best practices for fabrication and integration. Integrating actuators with electronics can still be difficult because of compatibility issues that arise in microfabrication. The works listed here demonstrate ways forward for each actuator class, but further exploration in this space could lift more fabrication constraints, enabling actuators to be added to broader classes of microsystems. Achieving these technical goals will help sustain the recent advances in small-scale robotics. Actuators and electronics processed lithographically already enable robot swarms of nearly 10,000 agents to be made massively in parallel and deployed all together. 7 By integrating onboard semiconductor circuits to control actuation, microroboticists are now able to shift away from purely mechanical solutions to locomotion toward programmable, electronic ones. 27 Finally, the overall cost of production for each machine remains at fractions of a penny even when complex circuits are included, inviting new applications for robots too small to see by eye in microfluidics, drug delivery, manufacturing, and materials science.

Figure 1. Actuators that respond to electronic control signals yet operate at dimensions under a millimeter have enabled a variety of remarkable applications. Such tiny devices can be used to make microscopic robots, 6 turn 2D lithographic patterns into controllable 3D origami structures, 7 pump liquids under user command with electronically controlled cilia, 25 and manipulate cells and microorganisms with microgrippers. 15 These applications are evolving rapidly, thanks to electronic control: circuits can be used to generate behaviors that respond to external stimuli 6 or are reprogrammed on-demand. 26

Figure 2. Examples of electronically controlled microactuators. (a) Actuators that operate at low voltage (sub 10 V) and high curvature (>1 mm−1) can readily integrate with circuits at the microscale. In recent years, several classes of actuators have emerged that meet these demands, spanning operating voltages from ~100 mV to 3 V and curvatures up to 1 µm−1. Broadly, actuators can be classified as electrochemical (bulk 4,5,13,14 and surface 7,8 ), thermal, 2,3 and piezoelectric. 9 (b) Examples of each class of actuators have been demonstrated at the microscale, including surface electrochemical actuators 7 (top panel), bulk electrochemical actuators using the lithiation of silicon to buckle microscale beams 14 or the charging of polymer layers to control microgrippers 4,5,13 (second from top), thermal actuators for microscale grippers 1 and origami 2 (second from bottom), and nanometer-thick aluminum nitride piezoelectric actuators 8 (bottom panel).

Figure 3. Examples of microrobots with bending microactuators. (a) Microrobots with thermal microactuators made with nitinol shape-memory alloys (SMAs). 12 Direct laser actuation heats the hinges, causing them to bend and the robots to walk. These microrobots walk on land at speeds close to a body length per second and can be tracked with onboard retroreflectors. (b) Microrobots with surface electrochemical actuators and onboard digital control electronics. 27 Both the legs and the circuit on these robots are powered by light. The onboard microelectronic circuit generates clock signals to drive the legs and set the gait of the robots. These microrobots operate in aqueous environments, move at close to 0.1 body lengths per second, and can change behavior in response to optically delivered commands. PVs, photovoltaics; IC, integrated circuit; SEA, surface electrochemical actuator.
Figure 4. (a) Efficiency versus power consumption per square millimeter for actuators operating at 1 Hz. The dotted line shows the approximate power per square millimeter for a silicon photovoltaic (PV) in bright sunlight (given a 1 mW/mm² incident light intensity and assuming a PV efficiency of about 10%). Even for microactuators with comparable efficiencies, power consumption can vary over almost six orders of magnitude. (b) Efficiency versus strain for microactuators shows a general relationship between the two: more efficient actuators operate at higher values of strain. In the case of electrochemical actuators, this result can be rationalized by looking at the dominant scaling behavior for electrical and mechanical energy. Moreover, if efficiency and strain scale together, then there are fundamental limits on actuator performance set by the elastic limits of the constituent materials. Indeed, vertical lines show where constituents for electrochemical actuators would begin to fail, indicating that within this class, further improvements in efficiency could be impossible without material innovation. Data are drawn from the following references: electrochemical bulk, 4,5,13 electrochemical surface, 7,8 battery, 14 thermal, 2,3 and piezoelectric. 9 SEAs, surface electrochemical actuators; EAPs, electroactive polymers.

Figure 5. Force per square against actuator response time shows that microactuators can operate over a wide range, but evidently face a performance tradeoff. An engineer can currently choose between a fast, weak actuator or a slow, strong one, but no actuator achieves both high force and fast response. For thermal and electrochemical actuators, these limits arise from transport constraints: an actuator needs to reach thermal (chemical) equilibrium to impart force. Future work could engineer these transport properties, thereby improving response time. Data are drawn from the following references: electrochemical bulk, 4,5,13 electrochemical surface, 7,8 battery, 14 thermal, 2,3 and piezoelectric. 9,35 SEAs, surface electrochemical actuators; EAPs, electroactive polymers.
6,274.8
2024-02-21T00:00:00.000
[ "Materials Science", "Engineering" ]
C. elegans toxicant responses vary among genetically diverse individuals The genetic variability of toxicant responses among individuals in humans and mammalian models requires practically untenable sample sizes to create comprehensive chemical hazard risk evaluations. To address this need, tractable model systems enable reproducible and efficient experimental workflows to collect high-replication measurements of exposure cohorts. Caenorhabditis elegans is a premier toxicology model that has revolutionized our understanding of cellular responses to environmental pollutants and boasts robust genomic resources and high levels of genetic variation across the species. In this study, we performed dose-response analysis across 23 environmental toxicants using eight C. elegans strains representative of species-wide genetic diversity. We observed substantial variation in EC10 estimates and slope parameter estimates of dose-response curves of different strains, demonstrating that genetic background is a significant driver of differential toxicant susceptibility. We also showed that, across all toxicants, at least one C. elegans strain exhibited a significantly different EC10 or slope estimate compared to the reference strain, N2 (PD1074), indicating that population-wide differences among strains are necessary to understand responses to toxicants. Moreover, we quantified the heritability of responses (phenotypic variance attributable to genetic differences between individuals) to each toxicant exposure and observed a correlation between the exposure closest to the species-agnostic EC10 estimate and the exposure that exhibited the most heritable response. At least 20% of the variance in susceptibility to at least one exposure level of each compound was explained by genetic differences among the eight C. elegans strains. Taken together, these results provide robust evidence that heritable genetic variation explains differential susceptibility across an array of environmental pollutants and that genetically diverse C. elegans strains should be deployed to aid high-throughput toxicological screening efforts. Introduction Hazard risk assessment of environmental chemicals is a top priority of toxicological research. Over 350,000 chemicals are currently registered for use and production globally, of which tens of thousands are either confidential or ambiguously described (Wang et al., 2020). This staggering rate of production, paired with traditional means of hazard safety testing, which typically uses mammalian or cell-based methods of response evaluation, means that human populations are exposed to a complex array of xenobiotic compounds with virtually unknown risk levels. Although approaches to hazard risk assessments using mammalian systems have translational appeal, they often suffer from low statistical power because of necessarily limited sample sizes. These approaches are also time-consuming and economically costly (Tralau et al., 2012), drastically reducing their potential for thorough risk assessment of a growing, sometimes multifactorial, collection of chemical exposures (Brooks et al., 2020). Most importantly, meta-analyses estimate that rodent systems predict human toxic effects approximately 50% of the time (Hartung, 2009; Knight et al., 2009), suggesting that chemical risk assessment requires a more integrative approach.
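The abstract above centers on EC10 and slope estimates derived from dose-response curves. As a rough illustration of what such an analysis can look like, the Python sketch below fits a four-parameter log-logistic curve to hypothetical animal-length data and extracts an EC10; the functional form, parameter names, and data are ours and are not taken from the study's actual statistical pipeline.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-response data: median animal length (um) vs toxicant dose (uM).
dose = np.array([0, 0.5, 1, 2, 4, 8, 16, 32, 64, 128], dtype=float)
length = np.array([980, 975, 960, 930, 850, 700, 520, 380, 310, 290], dtype=float)

def log_logistic(x, ec50, slope, lower, upper):
    """Four-parameter log-logistic curve (a common dose-response form)."""
    return lower + (upper - lower) / (1.0 + (x / ec50) ** slope)

# Fit on the nonzero doses; initial guesses are arbitrary but keep the fit well behaved.
p0 = [8.0, 1.0, 300.0, 980.0]
params, _ = curve_fit(log_logistic, dose[1:], length[1:], p0=p0, maxfev=10000)
ec50, slope, lower, upper = params

# ECx is the dose producing x% of the maximal effect; for this curve,
# EC10 = EC50 * (10/90)**(1/slope).
ec10 = ec50 * (10.0 / 90.0) ** (1.0 / slope)
print(f"EC50 = {ec50:.2f} uM, slope = {slope:.2f}, EC10 = {ec10:.2f} uM")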
Caenorhabditis elegans is a free-living nematode that can be cheaply reared in large samples in a matter of days, vastly accelerating the pace and scale at which hazard risk evaluations can be performed compared to most vertebrate models. Furthermore, studies using C. elegans provide data from whole animals with intact neuromuscular, digestive, and sensory systems, unlike popular in vitro systems. C. elegans is a powerful toxicology model that unites toxicologists with molecular geneticists so that expertise in routes of chemical exposure, internal dosage-specific effects, tissue distribution, and chemical metabolism is combined with expertise in DNA damage, oxidative and osmotic stress, and regulation of apoptosis and necrosis (Boyd et al., 2012; Hartman et al., 2021). All three phases of xenobiotic metabolism are present in C. elegans, though the conservation of specific gene families within each phase, such as the cytochromes P450, UDP-glucuronosyltransferases (UGTs), sulfotransferase enzymes (SULTs), and ATP-binding cassette (ABC) transporters (Hartman et al., 2021), has important differences. In addition to being inexpensive and easy to use, C. elegans responses to dozens of chemicals more accurately predict responses in rabbits and rats compared to zebrafish models (Boyd et al., 2016). Furthermore, meta-analyses indicate that rank-ordered toxicant sensitivity in several rodent models correlates with responses in C. elegans (Hunt, 2017). Finally, high-throughput approaches that measure phenotypic responses in C. elegans facilitate chemical screens in large populations at high replication (Andersen et al., 2015), providing a more facile and efficient risk assessment methodology that is a viable alternative to mammalian and cell-based systems. Therefore, toxicity assessments in C. elegans provide an alternative to vertebrate models with significantly greater scalability and potential to accelerate the characterization of molecular targets of chemical exposures. One approach to account for intra- and inter-species variation in toxicant responses is to use uncertainty factors (UFs) to translate a hazard's point of departure (POD) between species with distinct exposure routes and pharmacokinetic and pharmacodynamic capacities (Piersma et al., 2011). POD calculations alone fail to directly account for heritable genetic variation between individuals, that is, variance in susceptibility that can be explained by genetic differences that segregate among individuals in a population (Zeise et al., 2013). Failing to account for these differences leads to UFs serving as an imprecise proxy for within-species variation in risk because the process is agnostic to observed ranges of susceptibility in genetically diverse individuals. Measuring hazard risk explicitly across many genetic backgrounds can provide a direct empirical assessment of the merit of UFs as a methodology for quantifying population-wide variability caused by genetics. Evaluations that can quantify the contributions of genetics to toxicant response variation lay the foundation for quantitative genetic dissection, with the specific goal of revealing novel mechanisms of toxicant susceptibility by identifying risk alleles. Wild strains of C.
elegans harbor rich genetic variation (Andersen et al., 2012;Cook et al., 2017;Lee et al., 2021) and, by combining quantitative and molecular genetic approaches, offer the opportunity to discover genetic modifiers of toxicant susceptibility (Andersen et al., 2015;Bernstein et al., 2019;Evans et al., 2020;Zdraljevic et al., 2019). Quantifying the effects of genetics on toxicant susceptibility in C. elegans is an important step towards a full characterization of chemical hazard risk because the additive effects of conserved genes can help us understand novel toxicant response biology in humans. Additionally, the effects of these specific alleles can be dissected in C. elegans using genetic crosses and state-of-the-art molecular methods much faster than in mammalian systems. In this study, we performed dose-response analysis across 25 toxicants representing distinct chemical classes using eight strains of C. elegans representative of species-wide genetic diversity. We used a high-throughput imaging platform to assay development after exposing arrested first larval stage animals to each toxicant in a dose-dependent manner and used custom software (Di Tommaso et al., 2017;Nyaanga et al., 2021;Wählby et al., 2012) to measure phenotypic responses to each compound. By estimating dose-response curves for each toxicant and fitting strain-specific model parameters, we demonstrated that natural genetic variation is a key determinant of toxicant susceptibility in C. elegans. Moreover, we showed that the specific alleles that segregate between the eight strains in our cohort are responsible for heritable variation in toxicant susceptibility, which implies that quantitative genetic dissection of these responses has the potential to yield novel genetic loci underlying toxicant susceptibility. Taking these observations together, we propose that leveraging standing natural genetic variation in C. elegans is a powerful and complementary tool for high-throughput hazard risk assessments in translational toxicology. Strains The eight strains used in this study (PD1074, CB4856, MY16, RC301, ECA36, ECA248, ECA396, XZ1516) are available from the C. elegans Natural Diversity Resource (CeNDR) (Cook et al., 2017). Isolation details for the eight strains are included on CeNDR. Of the eight strains used, two (PD1074 and ECA248) are referred to by their isotype names (N2 and CB4855, respectively). Prior to measuring toxicant responses, all strains were grown at 20 °C on 6 cm plates made with modified nematode growth medium (NGMA) that contains 1% agar and 0.7% agarose to prevent animals from burrowing (Andersen et al., 2014). The NGMA plates were spotted with OP50 Escherichia coli as a nematode food source. All strains were propagated for three generations without starvation on NGMA plates prior to toxicant exposure. The specific growth conditions for nematodes used in the high-throughput toxicant response assay are described below (see Methods, High-throughput toxicant response assay). Nematode food preparation We prepared a single batch of HB101 E. coli as a nematode food source for all assays in this study. In brief, we streaked a frozen stock of HB101 E. coli onto a 10 cm Luria-Bertani (LB) agar plate and incubated it overnight at 37 °C. The following morning, we transferred a single bacterial colony into a culture tube that contained 5 ml of 1x Horvitz Super Broth (HSB). We then incubated that starter culture and a negative control (1X HSB without bacteria) for 18 h at 37 °C with shaking at 180 rpm. 
We then measured the OD 600 value of the starter culture with a spectrophotometer (BioRad, smartspec plus), calculated how much of the 18-h starter culture was needed to inoculate a one liter culture at an OD 600 value of 0.001, and used it to inoculate 14 4 L flasks that each contained one liter of pre-warmed 1x HSB. We grew those 14 cultures for 15 h at 37 °C with shaking at 180 rpm until they were in the early stationary growth phase (Supplemental Fig. 1A). We reasoned that food prepared from cultures grown to the early stationary phase (15 h) would be less variable than food prepared from cultures in the log growth phase. At 15 h, we removed the culture flasks from the incubator and transferred them to a 4 °C walk-in cold room to arrest growth. We then removed the 1X HSB from the cultures by three repetitions of pelleting the bacterial cells with centrifugation, disposing of the supernatant, and resuspending the cells in K medium. After the final wash, we resuspended the bacterial cells in K medium and transferred them to a 2 L glass beaker. We measured the OD 600 value of this bacterial suspension, diluted it to a final concentration of OD 600 100 with K medium, aliquoted it to 15 ml conicals, and froze the aliquots at −80 °C for use in the dose-response assays. Toxicant stock preparation We prepared stock solutions of the 25 toxicants using either dimethyl sulfoxide (DMSO) or water depending on the toxicant's solubility. The exact sources, catalog numbers, stock concentrations, and preparation notes for each of the toxicants are provided (Supplemental Table 1). Following preparation of the toxicant stock solutions, they were aliquoted to microcentrifuge tubes and stored at −20 °C for use in the dose-response assays. Exposure ranges were chosen for each chemical based on results from preliminary dose-response trials using only the N2 strain and six concentrations in order to narrow the exposure range for the larger eight strain experiments (data not shown). High-throughput toxicant dose-response assay For each replicate assay, populations of each strain were passaged for three generations, amplified, and bleach-synchronized in triplicate (Fig. 1A). We replicated the bleach synchronization to control for variation in embryo survival and subsequent effects on developmental rates that could be attributed to bleach effects (Porta-de-la-Riva et al., 2012) (Fig 2A). Following each bleach synchronization, we dispensed approximately 30 embryos into the wells of 96-well microplates in 50 μL of K medium (Boyd et al., 2012). We randomly assigned strains to rows of the 96-well microplates and varied the row assignments across the replicate bleaches. We prepared four replicate 96-well microplates within each of three bleach replicates for each toxicant and control condition tested in the assay. We then labeled the 96-well microplates, sealed them with gas permeable sealing film (Fisher Cat #14-222-043), placed them in humidity chambers, and incubated them overnight at 20 °C with shaking at 170 rpm (INFORS HT Multitron shaker). The following morning, we prepared food for the developmentally arrested first larval stage animals (L1s) using frozen aliquots of HB101 E. coli suspended in K medium at an optical density at 600 nm (OD 600 ) of 100 (see Methods, Nematode food preparation). 
We thawed the required number of OD 600 100 HB101 aliquots at room temperature, combined them into a single conical tube, diluted them to OD 600 30 with K medium, and added kanamycin at 150 μM to inhibit further bacterial growth and prevent contamination. Working with a single toxicant at a time, we then transferred a portion of the OD 600 30 food mix to a 12-channel reservoir, thawed an aliquot of toxicant stock solution at room temperature (see methods, Toxicant stock preparation), and diluted the toxicant stock to a working concentration. The toxicant working concentration was set to the concentration that would give the highest desired exposure when added to the 96-well microplates at 1% of the total well volume (the final concentration of the vehicle in all wells). We then performed a serial dilution of the toxicant working solution using the same diluent used to make the stock solution (Fig. 1C). The dilution factors ranged from 1.1 to 2 depending on the toxicant used, but all serial dilutions had 12 concentrations, including a 0 μM control. Concentrations were identified in a set of preliminary dose-response trials using just the N2 strain across a broader exposure range. Each control concentration was supplied at 1% of the total well volume in either water or DMSO. Using a 12-channel micropipette, we added the toxicant dilution series to the 12-channel reservoir containing the food mix at a 3% volume/volume ratio. Next, we transferred 25 μL of the OD 600 30 food and toxicant mix from the 12-channel reservoir into the appropriate wells of the 96-well microplates to simultaneously feed the arrested L1s at a final HB101 concentration of OD 600 10 and expose them to toxicant at one of 12 levels of the dilution series. We chose to feed at a final HB101 concentration of OD 600 10 because nematodes consistently developed to L4 larvae after 48 h of feeding at 20 °C (Supplemental Fig. 1B). Immediately after feeding, we sealed the 96-well microplates with a gas permeable sealing film (Fisher Cat #14-222-043), returned them to the humidity chambers, and started a 48-h incubation at 20 °C with shaking at 170 rpm. The remainder of the 96-well microplates were fed and exposed to toxicants in the same manner. After 48 h of incubation in the presence of food and toxicant, we removed the 96-well microplates from the incubator and treated the wells with sodium azide (325 μL of 50 mM sodium azide in 1X M9) for 10 min to paralyze and straighten the nematodes. We then immediately acquired images of nematodes in the microplates using a Molecular Devices ImageXpress Nano microscope (Molecular Devices, San Jose, CA) with a 2X objective (Fig. 1D). We used the images to quantify the development of nematodes in the presence of toxicants as described below (see Methods, Data collection, and Data cleaning). Data collection We wrote custom software packages designed to extract animal measurements from images collected on the Molecular Devices ImageXpress Nano microscope (Fig. 1E). CellProfiler is a widely used software program for characterizing and quantifying biological data from image-based assays (Carpenter et al., 2006;Kamentsky et al., 2011;McQuin et al., 2018). A collection of CellProfiler modules known as the WormToolbox were developed to extract morphological features of individual C. elegans animals from images from high-throughput C. elegans phenotyping assays like the one that we use here (Wählby et al., 2012). 
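Returning to the exposure setup described earlier in this section, the sketch below reproduces the dilution arithmetic for one 12-point exposure series; the top working concentration and dilution factor are illustrative assumptions, not values from the study.

```r
# Hypothetical exposure series: a 12-point serial dilution (including a 0 uM
# control) of a toxicant working solution; values are illustrative only.
top_working_uM  <- 5000  # working concentration giving the highest exposure (assumed)
dilution_factor <- 2     # the study used factors between 1.1 and 2
n_doses         <- 12    # 12 exposures, including the 0 uM control

working_series_uM <- c(top_working_uM / dilution_factor^(0:(n_doses - 2)), 0)

# The series is added to the food mix at 3% v/v, and 25 uL of that mix is
# dispensed into wells already holding 50 uL, so the toxicant ends up at 1%
# of the final 75 uL well volume (a 1:100 dilution of the working series).
final_well_uM <- working_series_uM / 100
round(final_well_uM, 3)
```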
We estimated worm models and wrote custom CellProfiler pipelines using the WormToolbox in the GUI-based instance of CellProfiler. We then wrote a Nextflow pipeline (Di Tommaso et al., 2017) to run command-line instances of CellProfiler in parallel on the Quest High Performance Computing Cluster (Northwestern University) because each experimental block in this study produced many thousands of well images. This workflow can be found at https://github.com/AndersenLab/cellprofiler-nf. Our custom CellProfiler pipeline generates animal measurements by using four worm models: three worm models tailored to capture animals at the L4 larval stage, in the L2 and L3 larval stages, and the L1 larval stage, respectively, as well as a "multi-drug high dose" (MDHD) model, to capture animals with more abnormal body sizes caused by extreme toxicant responses. We used R/easyXpress (Nyaanga et al., 2021) to filter measurements from worm objects within individual wells that were statistical outliers using the function setFlags(), which identifies outlier animal measurements using Tukey's fences (Tukey, 1977). We then parsed measurements from multiple worm models down to single measurements for single animals using the modelSelection() function. These measurements comprised our raw dataset. Data cleaning All data management and statistical analyses were performed using the R statistical environment (version 4.0.4). Our high-throughput imaging platform produced thousands of images across each experimental block. It is unwieldy to manually curate each individual well image to assess the quality of animal measurement data. Therefore, we took several steps to clean the raw data using heuristics indicative of high-quality animal measurements suitable for downstream analysis. 1. We began by censoring experimental blocks for which the coefficient of variation (CV) of the number of animals in control wells was greater than 0.6 (Supplemental Fig. 2A). Experiments containing wells that meet this criterion in control wells are expected to produce less precise estimates of animal lengths in wells in which animals have been exposed to chemicals that typically increase the variance of the body length trait (Supplemental Fig. 2B). 2. We then reduced the data to wells containing between five and thirty animals, under the null hypothesis that the number of animals is an approximation of the expected number of embryos originally titered into wells (approximately 30). This filtering step screened for two problematic features of well images in our experiment. First, given that our analysis relied on well median animal length measurements, we excluded wells with fewer than five animals to reduce sampling error. Second, insoluble compounds or bacterial clumps were often identified as animals by CellProfiler (Supplemental Fig. 3) and would vastly inflate the well census and spuriously deflate the median animal length in wells containing high concentrations of certain toxicants. 3. After the previous two data processing steps, we removed statistical outlier measurements within each concentration for each strain for every toxicant to reduce the likelihood that statistical outliers influence dose-response curve fits. 4. Next, we removed measurements from all exposures of each toxicant that were no longer represented in at least 80% of the independent assays because of previous data filtering steps, or had fewer than 10 measurements per strain. 5. 
Finally, we normalized the data by (1) regressing variation attributable to assay and technical replicate effects and (2) normalizing these extracted residual values with respect to the average control phenotype. For each compound, we estimated a linear model using the raw phenotype measurement as the response variable and both assay and technical replicate identity as explanatory variables following the formula median_wormlength_um ~ Metadata_Experiment + bleach using the lm() function in base R. We then extracted the residuals from this linear model for each exposure and subtracted the mean normalized phenotype in control conditions from the normalized phenotype measurements in each exposure. These normalized phenotype measurements were used in all downstream statistical analyses. LOAEL inference We determined the lowest observed adverse effect level (LOAEL) for each compound by performing a one-way analysis of variance using the normalized phenotype measurements as a response variable and toxicant dosage as an explanatory variable. We then performed a Tukey post hoc test, filtered to only comparisons to control exposures, and determined the lowest exposure that exhibited a significantly different phenotypic response as distinguished by an adjusted p-value less than 0.05. This analysis was performed on all phenotype measurements, as well as for each strain individually, to determine if genetic background differences explain differences in LOAEL for each toxicant. Dose-response model estimation and statistics We estimated overall and strain-specific dose-response models for each compound by fitting a log-logistic regression model using R/drc (Ritz et al., 2015). The log-logistic model that we used specified four parameters: b, the slope of the dose-response curve; c, the upper asymptote of the dose-response curve; d, the lower asymptote of the dose-response curve; and e, the specified effective exposure. This model was fit to each compound using the drc::drm() function with strain specified as a covariate for parameters b and e, allowing us to estimate strain-specific dose-response slopes and effective exposures, with the lower asymptote d fixed at −600, which is the theoretical normalized length of animals at the L1 larval stage. We used the drc::ED() function to extract strain-specific EC10 values and extracted the strain-specific slope values using base R. We quantified the relative resistance to each compound across all strain pairs based on their estimated EC10 values using the drc::EDcomp() function, which uses an approximate F-test to determine whether the variances (represented by delta-specified confidence intervals) calculated for each strain-specific dose-response model's e parameter estimates are significantly different. We quantified the relative slope steepness of dose-response models estimated for each strain within each compound using the drc::compParm() function, which uses a z-test to compare the means of each b parameter estimate. Results shown are filtered to just comparisons against N2 dose-response parameters (Figs. 2 and 3), and significantly different estimates in both cases were determined by correcting to a family-wise type I error rate of 0.05 using Bonferroni correction. To determine whether strains were significantly more resistant or susceptible to more toxicants or chemical classes by chance, we conducted 1000 Fisher exact tests using the fisher.test() function with 2000 Monte Carlo simulations.
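The following is a minimal R sketch of the normalization, LOAEL, and dose-response steps described above. The data frame and column names (well_df, concentration_um, norm_length) are illustrative assumptions rather than the study's actual objects, and the fit does not reproduce the study's exact parameter-sharing scheme across strains; the study's own code is available in the linked GitHub repository.

```r
# Minimal sketch of the per-compound analysis, assuming a long-format data
# frame `well_df` with illustrative columns: strain, concentration_um, bleach,
# Metadata_Experiment, and median_wormlength_um.
library(drc)

## (1) Normalization: regress out assay and bleach effects, then express the
##     residuals relative to the mean residual in control (0 uM) wells.
norm_fit <- lm(median_wormlength_um ~ Metadata_Experiment + bleach, data = well_df)
well_df$resid_length <- residuals(norm_fit)
ctrl_mean <- mean(well_df$resid_length[well_df$concentration_um == 0])
well_df$norm_length <- well_df$resid_length - ctrl_mean

## (2) LOAEL: one-way ANOVA across exposures followed by Tukey's HSD; the
##     LOAEL is the lowest non-zero exposure whose contrast with the control
##     has an adjusted p-value below 0.05 (contrast extraction omitted here).
loael_aov   <- aov(norm_length ~ factor(concentration_um), data = well_df)
loael_tukey <- TukeyHSD(loael_aov)

## (3) Four-parameter log-logistic fit with strain as a covariate. Note that
##     drc's LL.4() orders its parameters (b, c, d, e) with c and d as the
##     lower and upper limits; the fully arrested L1 length (-600) is supplied
##     here as the fixed asymptote, and all remaining parameters are allowed
##     to vary by strain in this simplified sketch.
dr_fit <- drm(norm_length ~ concentration_um,
              curveid = strain,
              data    = well_df,
              fct     = LL.4(fixed = c(NA, -600, NA, NA),
                             names = c("b", "c", "d", "e")))

## Strain-specific EC10 estimates with delta-method confidence intervals.
ec10 <- ED(dr_fit, respLev = 10, interval = "delta")
ec10
```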
Broad-sense and narrow-sense heritability calculations Phenotypic variance can be partitioned into variance caused by genetic differences, or genetic variance (VG), and residual variance explained by other factors (VE). We estimated H2 using the lme4 (v1.1.27.1) R package by fitting a linear mixed-effects model to the normalized phenotype data with strain as a random effect. We extracted the among-strain variance (VG) and the residual variance (VE) from this model and calculated broad-sense heritability (H2) with the equation H2 = VG / (VG + VE). Genetic variance (VG) can be partitioned into additive (VA) and non-additive (VNA) variance components. Additive genetic variance is the amount of genetic variance that can be explained by the discrete collection of variants that differ in a specific population. Narrow-sense heritability (h2) is defined as the ratio of additive genetic variance to the total phenotypic variance (VP), i.e., h2 = VA / VP. We generated a genotype matrix using the genomatrix profile of NemaScan, a GWAS analysis pipeline (Widmayer et al., 2022), using the variant call format (VCF) file generated in the latest CeNDR release (https://www.elegansvariation.org/data/release/latest). We then calculated h2 using the sommer (v4.1.5) R package by first computing the variance-covariance matrix (MA) from this genotype matrix with the sommer::A.mat function. We estimated VA using the linear mixed-effects model function sommer::mmer with strain as a random effect and MA as the covariance matrix. We then estimated h2 and its standard error using the sommer::vpredict function. Data availability All code and data used to replicate the data analysis and figures presented are available for download at https://github.com/AndersenLab/toxin_dose_responses. Results We performed dose-response assessments using a microscopy-based high-throughput phenotyping assay (Fig. 1) for developmental delay in response to 25 toxicants belonging to five major chemical classes: metals (9), insecticides (8), herbicides (3), fungicides (4), and flame retardants (1). Dose-response assessments for each compound were conducted using eight C. elegans strains representative of the genetic variation present across the species. We first quantified the population-wide lowest observed adverse effect level (LOAEL) for each compound (Supplemental Table 2). We then cleaned and normalized phenotype data in order to censor measurements obtained at problematic concentrations of various compounds and harmonize phenotypic responses across technical replicates (see Methods). Out of the 25 toxicants, twelve elicited variable LOAELs among the panel of strains: the insecticides aldicarb, chlorfenapyr, carbaryl, chlorpyrifos, and malathion; the fungicides pyraclostrobin and chlorothalonil; the metals manganese(II) chloride, methylmercury chloride, nickel chloride, and silver nitrate; and the flame retardant triphenyl phosphate (one-way ANOVA, Tukey HSD; p adj < 0.05). We next estimated dose-response curves for each compound to more precisely describe the contributions of genetic variation to different dynamics of susceptibility among strains (Fig. 1). To accomplish this step, we modeled four-parameter log-logistic dose-response curves for each compound using normalized median animal length as the phenotypic response.
The slope (b) and effective concentration (e) parameters of each dose-response model were estimated using strain as a covariate, allowing us to extract strain-specific dose-response parameters. Undefined EC10 estimates (estimates greater than the maximum exposure) were observed for at least one strain for two compounds (chlorfenapyr and manganese(II) chloride). Additionally, we observed virtually uniform responses and high within-strain phenotypic variance across the dose-response curves of deltamethrin and malathion across all strains. We speculate that this high variance is in part driven by insoluble particles in culture wells that interfered with reliable inference of animal lengths; we have consequently excluded these four compounds from further dose-response analyses (Supplemental Fig. 4). Dose-response models using strain as a covariate explained significantly more variation than those models without the strain covariate for the other 21 compounds (F-test; p < 0.001). We observed substantial variation in effective concentration between toxicants within classes of chemicals (two-way ANOVA; p < 0.001) but not across strains (two-way ANOVA; p ≥ 0.163) (Fig. 3A, Supplemental Table 3). All fungicides and herbicides exhibited significantly different EC10 estimates (two-way ANOVA, Tukey HSD; p adj ≤ 0.003). EC10 estimates for propoxur were not significantly different from aldicarb, nor were the estimates for methomyl compared to chlorpyrifos (two-way ANOVA, Tukey HSD; p adj ≥ 0.934), but EC10 estimates for all other compounds within the insecticide class were significantly different (two-way ANOVA, Tukey HSD; p adj ≤ 0.001). EC10 estimates for lead(II) nitrate were significantly different from all other tested metals (two-way ANOVA, Tukey HSD; p adj < 0.001). EC10 estimates for arsenic trioxide were significantly different from all tested metals (two-way ANOVA, Tukey HSD; p adj ≤ 0.050), except nickel chloride (two-way ANOVA, Tukey HSD; p adj = 0.068). EC10 estimates for all other metals were not significantly different from each other (two-way ANOVA, Tukey HSD; p adj ≥ 0.392). These results suggest that susceptibility to different toxicants in C. elegans is quite variable both between and within chemical classes. Most differences in EC10 were explained by differences among compounds of different classes. However, variation in EC10 estimates caused by genetic differences among strains was pervasive (Fig. 3B). In order to quantify these differences, we calculated the relative resistance to all compounds exhibited by each strain in pairwise comparisons of EC10 estimates among all strains (Supplemental Table 4). For example, for two strains with EC10 estimates of 5 μM and 10 μM in response to a chemical, the relative resistance of the second strain would equal 1. To contextualize these differences, we filtered down to comparisons between the reference strain N2 and all others and subsequently calculated the difference in potency with respect to the laboratory reference strain. In total, we observed 66 instances across 18 compounds where at least one strain was significantly more resistant or sensitive than the reference strain N2 using EC10 as a proxy (Student's t-test, Bonferroni correction; p adj < 0.05), with paraquat and propoxur being the exceptions (Fig. 3B). Twenty-two strain comparisons showed greater resistance than responses in the N2 strain, and 44 strain comparisons showed greater susceptibility across all compounds.
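The worked example above can be reproduced with simple arithmetic if relative resistance is taken as the fold-change in EC10 relative to the reference strain minus one; this definition is an assumption inferred from the example here and the figure legend later in the text, not a statement of the study's exact formula.

```r
# Illustrative arithmetic for the worked example above, assuming relative
# resistance = (EC10 of the compared strain / EC10 of the reference) - 1,
# so that identical EC10s give a value of 0.
ec10_reference <- 5    # uM, hypothetical reference (e.g., N2) EC10
ec10_strain    <- 10   # uM, hypothetical EC10 of the compared strain

relative_resistance <- ec10_strain / ec10_reference - 1
relative_resistance  # 1, i.e., an EC10 value 100% higher than the reference
```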
Relative resistance was more generalized across strains, with four different strains exhibiting significant sensitivity to at least three toxicants with respect to the N2 strain. Of the instances in which a strain was significantly more sensitive than the N2 strain, 47.8% of the cases were either the ECA396 or MY16 strains, which were the two strains with the greatest number of compounds that elicited sensitivity. Furthermore, the observed frequency of strains with significantly greater toxicant sensitivity with respect to the N2 strain was significantly different from that expected under the null (see Methods; Fisher's exact test; p < 0.05), suggesting that diverse C. elegans strains are not equally likely to be susceptible or resistant with respect to the commonly used reference strain N2. Strain-specific slope (b) estimates for each dose-response model varied substantially as well but followed different patterns than those estimates observed for EC10 (Fig. 4A, Supplemental Table 5). We again observed substantial variation in slope estimates between toxicants within chemical classes (two-way ANOVA; p < 0.001) but not across strains (two-way ANOVA; p ≥ 0.074). Slope estimates for pyraclostrobin were significantly lower than all other fungicides (two-way ANOVA, Tukey HSD; p adj ≤ 0.0002). Slope estimates for 2,4-D were significantly lower than those estimates for the other two herbicides (two-way ANOVA, Tukey HSD; p adj < 0.0001). Among insecticides, the only slope estimates that were not significantly different from each other were those for methomyl and aldicarb (two-way ANOVA, Tukey HSD; p adj = 0.999). Slope estimates for nickel chloride were significantly different from all other metals (two-way ANOVA, Tukey HSD; p adj ≤ 0.031). We next compared the relative steepness of dose-response slope estimates to those of the N2 reference strain, analogously to our EC10 relative potency analysis (all strain-by-strain comparisons can be found in Supplemental Table 6), and observed 76 significantly different slope steepness comparisons with the reference strain (Fig. 4B). The greatest number of significantly different slope estimates among strains were observed in insecticides, which comprised 24 (31%) of the comparisons. Four strains exhibited at least ten significantly different slope estimates (CB4855, CB4856, MY16, XZ1516), and five strains (CB4855, CB4856, ECA396, MY16, RC301) exhibited more instances of significantly shallower dose-response slopes than N2. Furthermore, the number of significantly shallower dose-response slopes for each strain compared to the N2 strain was significantly different from that expected under the null (see Methods; Fisher's exact test; p = 0.041). Taken together, these results suggest that genetic differences between C. elegans strains mediate differential susceptibility and toxicodynamics across a diverse range of toxicants. In order to quantify the degree of phenotypic variation attributable to segregating genetic differences among strains, we first estimated the broad-sense heritability of the phenotypic response for each exposure of every compound. We observed a wide spectrum of broad-sense and narrow-sense heritability estimates across compounds and exposure ranges (Fig. 5). Excluding control exposures, the average broad-sense heritability across all exposures of each compound ranged from 0.05 (atrazine) to 0.36 (chlorpyrifos), and narrow-sense heritability ranged from 0.05 (copper(II) chloride) to 0.37 (chlorpyrifos).
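As a minimal sketch of the broad-sense heritability calculation described in the Methods, assuming a data frame `df` of normalized lengths for a single compound and exposure (the object and column names are illustrative, not the study's actual code):

```r
# Broad-sense heritability from a random-effects model of strain, following
# H2 = VG / (VG + VE); `df` and its columns are illustrative assumptions.
library(lme4)

h2_fit <- lmer(norm_length ~ 1 + (1 | strain), data = df)
vc     <- as.data.frame(VarCorr(h2_fit))
V_G    <- vc$vcov[vc$grp == "strain"]    # among-strain (genetic) variance
V_E    <- vc$vcov[vc$grp == "Residual"]  # residual variance
H2     <- V_G / (V_G + V_E)
H2
```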
Motivated by the wide range of additive genetic variance estimates that we observed across exposures of each compound, we asked how closely the exposures that exhibited the greatest narrow-sense heritability aligned with the EC10s estimated for each compound. We compared the narrow-sense heritabilities between the exposure closest to the estimated EC10 and the exposure that exhibited the maximum narrow-sense heritability for each of the 21 compounds with definitive EC10 estimates. We observed a strong relationship between the exposures that approximate the EC10 for each compound and the exposures that yielded the greatest narrow-sense heritability (Fig. 6). Interestingly, although the correlation between these two endpoints was strong, the dosage of each compound that exhibited the greatest additive genetic variance was always greater than the exposure that approximated the EC10 for that compound, demonstrating that the additive genetic variation responsible for the greatest differences in toxicant responses among C. elegans strains is typically revealed at greater exposure levels than the average estimated EC10. Discussion One of the central goals of toxicology is to achieve precise chemical risk assessments in populations characterized by diversity over broad socioeconomic, environmental, and genetic scales. At the level of initial screening in model organisms, these assessments have typically been limited to a single strain or cell line's genetic background. However, given the sheer number of uncharacterized toxicants being produced, it is economically infeasible to rely entirely on mammalian systems to rigorously evaluate these hazards on a reasonable time scale. Research using C. elegans as a model is a staple of toxicology, particularly when it comes to identifying key regulators of cellular responses to metal and pesticide exposures (Hartman et al., 2021; Hunt, 2017). However, these discoveries have typically relied on perturbing a single genome (and therefore a singular collection of "wild-type" alleles) using RNA interference or knockout alleles for individual genes. In this study, we expanded the scope of C. elegans-based chemical hazard evaluations to consider the effects of naturally occurring genetic variants in the C. elegans species by performing dose-response analysis using the N2 laboratory-adapted reference strain as well as seven wild strains representing the major axes of species-wide genetic variation. We conducted these analyses using a high-throughput microscopy assay that facilitates rigorous control over experimental noise, genetic effects, and toxicant exposure across millions of C. elegans individuals from each of our eight genetic backgrounds. This paradigm allowed us to precisely estimate the effects of genetics on impaired development in the presence of a toxicant and tease them apart from experimental noise. Estimating toxic endpoints of chemical hazards has previously been performed using high-throughput screening of C. elegans responses (Boyd et al., 2012; Evans et al., 2018). In our study, we have leveraged and expanded on these types of platforms by explicitly estimating genetic effects on dose-response parameters. One goal of dose-response analysis is to identify a point of departure (POD) for exposure to a certain compound (e.g., a dosage at which a population begins to respond adversely to a hazard) based on empirical data. We demonstrated that EC10 estimates and slope parameters vary significantly between genetically distinct C. elegans strains and that, in fact, the N2 reference strain exhibits a significantly different dose-response profile than at least one other strain with respect to every toxicant we assessed. Additionally, strain-agnostic EC10 estimates are correlated with, but generally lower than, the exposure at which we observed the largest additive genetic variance. These observations suggest that previous analyses of toxicity in C. elegans might suffer from "genetic blind spots" in that significant intrinsic drivers of population-level toxicity are being systematically ignored, which then masks a source of complexity in toxicant susceptibility. For example, we observed that the strains ECA396 and MY16 are significantly more sensitive than other strains across more toxicants than expected by chance. The susceptibility profiles of these strains underscore the need to assess hazard risk across individuals that are intrinsically susceptible or resistant to understand the implications of dose-response endpoints. Because our high-throughput assay only reports the magnitude of developmental delay over one generation as a trait, it remains unknown whether the resistance that we observed in these strains, or for a given toxicant more broadly, extends to other toxicity endpoints (e.g., germline mutagenesis, effects on reproduction, metabolic signatures, or neurotoxicity). The toxicants in our study belong to classes of chemicals with documented effects on all of these organ systems, so the identification of putatively resistant genetic backgrounds could represent fertile ground for the discovery of novel pathways that potentiate well-characterized stress responses. An open question in toxicogenomics is the degree to which variation in human disease and development can be explained by our chemical environment, and whether these contributions exceed those from genetic differences among individuals. Our study suggests that, for any given compound, we can find a dosage at which at least 20% of the variation in developmental delay can be explained by genetic differences between C. elegans strains. Furthermore, we show empirical support for the notion that toxic endpoints derived in experimental studies from one genetic background cannot be neatly translated across genetically diverse individuals. These findings build upon similar analyses conducted using human cell lines derived from the 1000 Genomes Project (Abdo et al., 2015), which revealed substantial heritability of dose-response endpoints. Given that high-throughput platforms exist that facilitate these analyses, stakeholders in toxicology should (1) prioritize the derivation of PODs in genetically diverse model organism populations and (2) report, to the extent possible, heritability estimates of toxicant responses when multiple genetic backgrounds are used. These steps would ensure that they can precisely quantify this source of uncertainty in hazardous chemical evaluations. Given that the ranked susceptibility to toxicants is correlated between C. elegans and mammalian systems (Hunt, 2017), high-throughput phenotyping systems provide a complementary platform for chemical hazard assessment that also accounts for genetic variability. Also, given the high heritability estimates of the compounds that we tested, quantitative genetic analyses such as genome-wide association studies in genetically diverse model organisms provide an opportunity to identify conserved genes that mediate population-level differences in toxicant susceptibility.
Normalized length measurements for each strain at each toxicant exposure are shown on the y-axis, and the concentration of each toxicant is shown on the x-axis. Each dose-response curve is colored according to the strain. Dose-response curves for each toxicant can be found in Supplemental Fig. 5. We observed a wide range of responses that can be combined into four general groups: A) subtle responses with little variation among strains, e.g., 2,4-D; B) subtle responses with moderate variation among strains, e.g., carbaryl; C) strong responses with little variation among strains, e.g., nickel chloride (though for nickel chloride, strain variation is high at high exposure levels, see Fig. 5); and D) strong responses with moderate variation among strains, e.g., pyraclostrobin.

Fig. 3. Variation in EC10 estimates can be explained by genetic differences among strains. A) Strain-specific EC10 estimates for each toxicant are displayed for each strain. Standard errors for each strain- and toxicant-specific EC10 estimate are indicated by the line extending from each point. B) For each toxicant, each strain's relative resistance to that toxicant compared to the N2 strain is shown. Relative resistance above 1, for example, denotes an EC10 value 100% higher than that of the N2 strain. Solid points denote strains with significantly different relative resistance to that toxicant (F-test and subsequent Bonferroni correction with p adj < 0.05; see Methods, Dose-response model estimation), and faded points denote strains not significantly different from the N2 strain. The broad category to which each toxicant belongs is denoted by the strip label for each facet.

A) Strain-specific slope estimates for each toxicant are displayed for each strain. Standard errors for each strain- and toxicant-specific slope estimate are indicated by the line extending from each point. B) For each toxicant, the relative steepness of the dose-response slope inferred for that strain compared to the N2 strain is shown. Solid points denote strains with significantly different dose-response slopes (Student's t-test and subsequent Bonferroni correction with p adj < 0.05; see Methods, Dose-response model estimation), and faded points denote strains without significantly different slopes than the N2 strain. The broad category to which each toxicant belongs is denoted by the strip label for each facet.

The broad-sense (x-axis) and narrow-sense (y-axis) heritability of normalized animal length measurements was calculated for each concentration of each toxicant (Methods; Broad-sense and narrow-sense heritability calculations). The color of each cross corresponds to the log-transformed exposure for which those calculations were performed. The horizontal line of the cross corresponds to the confidence interval of the broad-sense heritability estimate obtained by bootstrapping, and the vertical line of the cross corresponds to the standard error of the narrow-sense heritability estimate.

The log-transformed exposure that elicited the most heritable response to each toxicant (y-axis) is plotted against the log-transformed exposure of that same toxicant nearest to the inferred EC10 from the dose-response assessment. The exposure closest to the EC10 across all toxicants exhibited significant explanatory power to determine the exposure that elicited heritable phenotypic variation.
Human Joint Angle Estimation Using Deep Learning-Based Three-Dimensional Human Pose Estimation for Application in a Real Environment

Human pose estimation (HPE) is a technique used in computer vision and artificial intelligence to detect and track human body parts and poses using images or videos. Widely used in augmented reality, animation, fitness applications, and surveillance, HPE methods that employ monocular cameras are highly versatile and applicable to standard videos and CCTV footage. These methods have evolved from two-dimensional (2D) to three-dimensional (3D) pose estimation. However, in real-world environments, current 3D HPE methods trained on laboratory-based motion capture data encounter challenges, such as limited training data, depth ambiguity, left/right switching, and issues with occlusions. In this study, four 3D HPE methods were compared based on their strengths and weaknesses using real-world videos. Joint position correction techniques were proposed to eliminate and correct anomalies such as left/right inversion and false detections of joint positions in daily life motions. Joint angle trajectories were obtained for intuitive and informative human activity recognition using an optimization method based on a 3D humanoid simulator, with the joint positions corrected by the proposed technique as the input. The efficacy of the proposed method was verified by applying it to three types of freehand gymnastic exercises and comparing the joint angle trajectories during motion.

Introduction The field of 3D motion analysis is rapidly evolving, particularly in sports, home fitness, and healthcare. Consequently, several advanced technologies are emerging in the market. According to a 2022 survey, the global 3D motion capture market is expected to generate USD 1.165 billion by 2033 [1]. There are two clear divisions in motion-capture technology: (i) marker/optical systems that often use infrared cameras and reflective markers and (ii) marker-less motion-capture (MLMC) systems, which are growing in popularity because of their lower costs and ease of use in less complex tasks, such as treadmill analysis during running. In contrast to traditional motion analyses, an MLMC system does not require markers on the body, thereby simplifying the process significantly. In particular, it can be utilized to identify neurological conditions, such as Parkinson's disease, by analyzing the body's walking patterns or gait [2]. An alternative method for motion capture involves the use of inertial measurement units (IMUs) that encompass accelerometers and gyroscopes [3]. Although these sensors do not offer exhaustive data capture for full-body systems, they can effectively capture motion to a significant degree.

Motion-capture technology is widely used for gait analysis in sports and is essential for activities involving running motion, such as sports medicine, to study athletic movements and identify dysfunctions related to injuries [4]. This technology is crucial for understanding athlete success and handling complex injuries. Moreover, MLMC technology has been tested in a community setting and has proven particularly useful for the identification of neurological impairments and tracking rehabilitation progress.
Accordingly, the technology for human pose estimation (HPE) using monocular camera sensors has witnessed rapid development. Monocular HPE is used to locate the 3D positions of human body joints in 2D images or videos. The existing studies can be divided into two categories: deterministic and probabilistic. The deterministic approaches in [5][6][7] produced a single definite 3D pose for each image, whereas the probabilistic approaches in [8][9][10] represented 2D to 3D lifting as a probability distribution and produced a set of possible solutions for each image. In [11], both approaches were combined by aggregating multiple pose hypotheses into single and higher-quality 3D poses. The deterministic approach is more practical for real-world applications; thus, deterministic methods are suitable for real-time HPE. Deterministic approaches rely on pixel-aligned 3D keypoints [6], mesh vertices [12], and mesh-aligned features [13] to obtain accurate HPE. Among these, pixel-aligned approaches exhibit high HPE accuracy; however, in deep learning (DL) methods, various challenges remain, including occluded areas and a lack of training data [14,15]. To address these issues, methods have been proposed to alleviate the occlusion problem using sensor fusion [16] and multiple cameras [17]. However, their application in real-world environments remains challenging.

Although a previous study [18] comprehensively evaluated the performance of the latest 3D HPE algorithms, the evaluation of inference accuracy and inference time for previously known problems, such as occlusion, remains unclear. In this study, we focused on single-view single-person 3D HPE to identify problems when applied to motion recognition in various real-world videos, using four 3D HPE methods to analyze the problem: MediaPipe Pose (MPP) [5], Hybrid Inverse Kinematics solution (HybrIK) [6], Multi-Hypothesis Transformer (MHFormer) [10], and Diffusion-based 3D Pose Estimation (D3DP) [11]. In addition, we proposed data-processing techniques to eliminate and correct anomalies, such as left/right joint position inversion and false detections in daily life motions. Finally, joint angle trajectories of a 3D humanoid simulator were obtained for intuitive and informative human activity recognition (HAR) using the univariate dynamic encoding algorithm for searches (uDEAS), which has been proven to be successful for 2D joint coordinates [19,20]. The 3D joint coordinate data, corrected by applying the proposed data-correction technique, were used as the input. If the accuracy of joint angle-based 3D HAR using a monocular camera becomes acceptable, it can be applied to a wide range of fields, such as recognizing hazardous behaviors in daily life, autonomous driving, personalized home care, the metaverse, healthcare, and medical clinical rehabilitation therapy.

Related Work According to recent research, the 3D HPE approach determines whether to reconstruct only the skeleton or recover the 3D human mesh using a skeleton and volumetric model [18]. Figure 1 shows the 3D HPE framework configuration diagram commonly used in a single-view, single-person approach.

Skeleton Model The human skeleton model is advantageous because it intuitively describes the structure of the human body using a tree structure that links the joints with lines. This model is used not only for 3D pose estimation but also for 2D pose estimation because of its simple structure, which reduces the computational cost and time. Previous studies can be classified into direct estimation and 2D to 3D lifting approaches.
The direct estimation approach involves a single step; it directly infers 3D joint locations from images or videos via an end-to-end network. A representative algorithm is MPP, an open-source library released by Google in 2020. MPP estimates 33 landmarks of the human body joints using the BlazePose model [21]. Research analyzing motions in activities of daily living [19,20] and karate using MPP has recently gained momentum [22]. MPP uses a detector-tracker ML pipeline. First, a pose detector identifies the region of interest (ROI) within an RGB image using facial landmarks to determine the presence of a person. Subsequently, a pose tracker infers 33 landmarks within the ROI.

The 2D to 3D lifting approach comprises two steps. It estimates the 2D pose from the input images or videos and then the 3D joint locations. Representative 2D to 3D lifting approaches include transformer-based and diffusion-based approaches. Diffusion models generate high-dimensional data through the gradual transformation of the data. D3DP is a diffusion-based 3D HPE method proposed in 2023. First, D3DP generates multiple possible 3D pose hypotheses for a single 2D observation. Second, it gradually diffuses the ground-truth 3D poses into a random distribution and learns a denoiser conditioned on 2D keypoints to recover the uncontaminated 3D poses. Third, joint-wise reprojection-based multi-hypothesis aggregation (JPMA) is used to combine the multiple generated hypotheses into a single 3D pose. Consequently, it reprojects the 3D pose hypotheses onto a 2D camera plane, selects the best hypothesis joint-by-joint based on reprojection errors, and combines the selected joints into the final pose [11]. The transformer architecture, originally used primarily in natural language processing, is now also being applied in the field of computer vision. It learns the relationships between tokens extracted from images to infer 3D poses. In MHFormer, proposed in 2022, multi-hypothesis spatiotemporal feature structures are explicitly combined into transformer models, and the multiple hypotheses of body joint information attained in 2D to 3D lifting are independently and mutually processed in an end-to-end manner. The MHFormer is decomposed into three stages. First, multiple initial hypothesis representations are generated. Second, for model self-hypothesis communication, multiple hypotheses are merged into a single converged representation and then partitioned into several divergent hypotheses. Third, cross-hypothesis communication is learned, and multi-hypothesis features are aggregated to synthesize the final 3D pose [10].

Volumetric Model The human mesh recovery (HMR) technique, which represents the human body in a 3D mesh form from a single image, has gained attention in recent developments. This method involves reconstructing the human body as a 3D volumetric mesh model using an input image or video. A notable 3D mesh model used in this context is the skinned multi-person linear (SMPL) model [23]. DL algorithms based on a 3D mesh model demonstrate improved accuracy in pose estimation by considering the body shape and rotation matrices, thus accounting for twisting movements. However, these algorithms incur high computational costs and long processing times. In addition, the limitations of the 3D joint coordinate datasets used for DL training often result in lower accuracy for untrained poses. Training on datasets containing 3D information typically involves capturing data in laboratory settings using motion-capture equipment. This can increase the likelihood of false detections in clothing and real-world environments. Previous studies based on HMR include Pose2Pose [24], HybrIK [6], and FrankMoCap [25]. These research efforts reflect the ongoing challenges and developments in the field of 3D human pose estimation. HybrIK, proposed in 2020, is an inverse kinematics solution that considers the volume of a human body in 3D. Previous estimation methods based on HMR reconstructed a 3D mesh by estimating multiple parameters. However, the learning of abstract parameters can degrade the model's performance. Thus, HybrIK employs an inverse kinematics approach to bridge the gap between mesh and 3D skeletal coordinate estimation. It supports two models: SMPL [23] and SMPL-X [26].

Pose-Estimation Methods In this study, we implemented and compared the end-to-end deep learning models MPP and HybrIK and the hybrid models MHFormer and D3DP. The implementation environments for these methods are presented in Table 1. In addition, Figure 2 shows the landmark locations for each algorithm.
Volumetric Model The human mesh recovery (HMR) technique, which represents the human body in a 3D mesh form from a single image, has gained attention in recent developments.This method involves reconstructing the human body as a 3D volumetric mesh model using an input image or video.A notable 3D mesh model used in this context is the skinned multiperson linear (SMPL) model [23].DL algorithms based on a 3D mesh model demonstrate improved accuracy in pose estimation by considering the body shape and rotation matrices, thus accounting for twisting movements.However, these algorithms incur high computational costs and long processing times.In addition, the limitations of the 3D joint coordinate datasets used for DL training often result in lower accuracy for untrained poses.Training on datasets containing 3D information typically involves capturing data in laboratory settings using motioncapture equipment.This can increase the likelihood of false detections in clothing and realworld environments.Previous studies based on HMR included Pose2Pose [24], HybrIK [6], and FrankMoCap [25].These research efforts reflect the ongoing challenges and developments in the field of 3D human pose estimation.HybrIK, proposed in 2020, is an inverse kinematics solution that considers the volume of a human body in 3D.Previous estimation methods based on HMR reconstructed a 3D mesh by estimating multiple parameters.However, the learning of abstract parameters can degrade the model's performance.Thus, HybrIK employs an inverse kinematics approach to bridge the gap between mesh and 3D skeletal coordinate estimation.It supports two models: SMPL [23] and SMPL-X [26]. Pose-Estimation Methods In this study, we implemented and compared the end-to-end deep learning models MPP and HybrIK and the hybrid models MHFormer and D3DP.The implementation environments for these methods are presented in Table 1.In addition, Figure 2 shows the landmark locations for each algorithm.To compare the accuracy of the DL models in real-world environments, we conducted a comparison using video footage.Specifically, RGB videos recorded at a resolution of 1280 × 720 pixels and 30 FPS were used as inputs for the DL models.The selected input videos included complex postures, scenes with objects resembling human figures, footage recorded from a distance, and videos captured under various lighting conditions.This section analyzes the limitations and challenges of deep-learning models in real-world environments. Figure 3 shows the first video, featuring a complex yoga pose with intertwined human joints.In Figure 3a,b, the skeleton models estimated using MPP and MHFormer were overlaid.The area wherein a person was recognized was marked using bounding boxes (BBs).Even for a single image, the model accurately detected the upward bending of the left leg.This demonstrates the effectiveness of the MPP and MHFormer in complex pose recognition.Figure 3c,d show the estimation results in which the skeletal model, estimated to be D3DP, and the SMPL model, estimated to be HybrIK, were overlaid on the image.The algorithm accurately estimates the leg positions within a certain angle range.However, as the complexity of the pose increased, the estimation accuracy decreased. 
Performance Comparison in Real-World Environments To compare the accuracy of the DL models in real-world environments, we conducted a comparison using video footage.Specifically, RGB videos recorded at a resolution of 1280 × 720 pixels and 30 FPS were used as inputs for the DL models.The selected input videos included complex postures, scenes with objects resembling human figures, footage recorded from a distance, and videos captured under various lighting conditions.This section analyzes the limitations and challenges of deep-learning models in real-world environments. Figure 3 shows the first video, featuring a complex yoga pose with intertwined human joints.In Figure 3a,b, the skeleton models estimated using MPP and MHFormer were overlaid.The area wherein a person was recognized was marked using bounding boxes (BBs).Even for a single image, the model accurately detected the upward bending of the left leg.This demonstrates the effectiveness of the MPP and MHFormer in complex pose recognition.Figure 3c,d show the estimation results in which the skeletal model, estimated to be D3DP, and the SMPL model, estimated to be HybrIK, were overlaid on the image.The algorithm accurately estimates the leg positions within a certain angle range.However, as the complexity of the pose increased, the estimation accuracy decreased.Next, we compared the estimation results for a person riding a bicycle.Figure 4 shows an image of a person cycling shot from the side, which includes occlusion areas where certain joints were obscured and external objects were present.In Figure 4a-c, MPP, MHFormer, and D3DP accurately identified the joints of a person without mistaking the bicycle as a human figure, respectively.However, as shown in Figure 4d, HybrIK attempted to misidentify a bicyclist as a person and estimate the 3D posture.Thus, the estimated SMPL model deviated significantly from that of the target person.Next, we compared the estimation results for a person riding a bicycle.Figure 4 shows an image of a person cycling shot from the side, which includes occlusion areas where certain joints were obscured and external objects were present.In Figure 4a-c, MPP, MHFormer, and D3DP accurately identified the joints of a person without mistaking the bicycle as a human figure, respectively.However, as shown in Figure 4d, HybrIK attempted to misidentify a bicyclist as a person and estimate the 3D posture.Thus, the estimated SMPL model deviated significantly from that of the target person. Performance Comparison in Real-World Environments To compare the accuracy of the DL models in real-world environments, we conducted a comparison using video footage.Specifically, RGB videos recorded at a resolution of 1280 × 720 pixels and 30 FPS were used as inputs for the DL models.The selected input videos included complex postures, scenes with objects resembling human figures, footage recorded from a distance, and videos captured under various lighting conditions.This section analyzes the limitations and challenges of deep-learning models in real-world environments. 
Figure 3 shows the first video, featuring a complex yoga pose with intertwined human joints.In Figure 3a,b, the skeleton models estimated using MPP and MHFormer were overlaid.The area wherein a person was recognized was marked using bounding boxes (BBs).Even for a single image, the model accurately detected the upward bending of the left leg.This demonstrates the effectiveness of the MPP and MHFormer in complex pose recognition.Figure 3c,d show the estimation results in which the skeletal model, estimated to be D3DP, and the SMPL model, estimated to be HybrIK, were overlaid on the image.The algorithm accurately estimates the leg positions within a certain angle range.However, as the complexity of the pose increased, the estimation accuracy decreased.Next, we compared the estimation results for a person riding a bicycle.Figure 4 shows an image of a person cycling shot from the side, which includes occlusion areas where certain joints were obscured and external objects were present.In Figure 4a-c, MPP, MHFormer, and D3DP accurately identified the joints of a person without mistaking the bicycle as a human figure, respectively.However, as shown in Figure 4d, HybrIK attempted to misidentify a bicyclist as a person and estimate the 3D posture.Thus, the estimated SMPL model deviated significantly from that of the target person.The third video was shot from a distance and featured an individual in a pitching stance.Owing to the camera angle, certain joints of the person were located in self-occluded areas.Figure 5a-d present the estimation results for MPP, MHFormer, D3DP, and HybrIK, respectively.The four methods demonstrated good estimation accuracy for this scenario, indicating their effectiveness in addressing challenges such as distant subjects, particularly in capturing and analyzing the posture of a person engaged in a specific activity such as pitching. Sensors 2024, 24, x FOR PEER REVIEW 6 of 22 The third video was shot from a distance and featured an individual in a pitching stance.Owing to the camera angle, certain joints of the person were located in self-occluded areas.Figure 5a-d However, all four methods yielded unsatisfactory results for occluded areas.Inaccurate estimates of the occluded areas were observed in certain frames, as shown in Figure 6.The final case involved a video with varying light intensities because of shadows.In real-world environments, sunlight or artificial lighting can cause shadows, and people often wear clothing with various patterns.This can result in frequent and abrupt changes in the color of RGB images. Figure 7 presents the estimation results of the MPP, which demonstrates its capability to estimate poses, even from the back of a person.However, there are frequent occurrences of coordinate inversions on the left and right sides resulting from changes in lighting conditions.However, all four methods yielded unsatisfactory results for occluded areas.Inaccurate estimates of the occluded areas were observed in certain frames, as shown in Figure 6. 
Sensors 2024, 24, x FOR PEER REVIEW 6 of 22 The third video was shot from a distance and featured an individual in a pitching stance.Owing to the camera angle, certain joints of the person were located in self-occluded areas.Figure 5a-d However, all four methods yielded unsatisfactory results for occluded areas.Inaccurate estimates of the occluded areas were observed in certain frames, as shown in Figure 6.The final case involved a video with varying light intensities because of shadows.In real-world environments, sunlight or artificial lighting can cause shadows, and people often wear clothing with various patterns.This can result in frequent and abrupt changes in the color of RGB images. Figure 7 presents the estimation results of the MPP, which demonstrates its capability to estimate poses, even from the back of a person.However, there are frequent occurrences of coordinate inversions on the left and right sides resulting from changes in lighting conditions.The final case involved a video with varying light intensities because of shadows.In real-world environments, sunlight or artificial lighting can cause shadows, and people often wear clothing with various patterns.This can result in frequent and abrupt changes in the color of RGB images. Figure 7 presents the estimation results of the MPP, which demonstrates its capability to estimate poses, even from the back of a person.However, there are frequent occurrences of coordinate inversions on the left and right sides resulting from changes in lighting conditions. Sensors 2024, 24, x FOR PEER REVIEW 6 of 22 The third video was shot from a distance and featured an individual in a pitching stance.Owing to the camera angle, certain joints of the person were located in self-occluded areas.Figure 5a-d However, all four methods yielded unsatisfactory results for occluded areas.Inaccurate estimates of the occluded areas were observed in certain frames, as shown in Figure 6.The final case involved a video with varying light intensities because of shadows.In real-world environments, sunlight or artificial lighting can cause shadows, and people often wear clothing with various patterns.This can result in frequent and abrupt changes in the color of RGB images. Figure 7 presents the estimation results of the MPP, which demonstrates its capability to estimate poses, even from the back of a person.However, there are frequent occurrences of coordinate inversions on the left and right sides resulting from changes in lighting conditions.Figure 8 presents the estimation results for HybrIK, MHFormer, and D3DP.Similar to the MPP, all methods accurately estimated the pose of a person from the back.Furthermore, it was more robust in handling changes in light intensity compared to the MPP, providing more stable estimation results under varying lighting conditions.This indicated a certain level of resilience to environmental lighting changes, which is crucial for practical applications in diverse real-world scenarios. 
Figure 8 presents the estimation results for HybrIK, MHFormer, and D3DP. Similar to MPP, all of these methods accurately estimated the pose of a person viewed from the back. Furthermore, they were more robust to changes in light intensity than MPP, providing more stable estimation results under varying lighting conditions. This indicates a certain level of resilience to environmental lighting changes, which is crucial for practical applications in diverse real-world scenarios.

Summarizing the results thus far, each DL HPE algorithm generally performed well in human recognition and joint detection in real-world environments. However, inaccurate estimation results were observed in certain frames when various objects, light changes, or occlusions were present. In particular, joint position estimation accuracy decreased in complex intertwined postures, such as yoga poses. Furthermore, MPP exhibited a left-right switching phenomenon, and HybrIK showed a decrease in joint detection accuracy owing to human recognition errors.

Finally, it is noteworthy that none of the four HPE methods produces joint angles. To move beyond person recognition in 2D images to recognizing and predicting 3D human actions, accurate estimation of joint angles is essential. Therefore, this study further investigated the removal and correction of anomalies in DL models to develop a method that improves the accuracy and applicability of HPE, using an optimization method that determines the angles of each joint with reference to a 3D humanoid model.

Improving Human Recognition Accuracy

Improving human recognition performance necessitates enhancing the accuracy of DL models. In real-world environments, video data often contain multiple people and objects resembling human figures. Thus, for 3D HPE technology that focuses on a single person to be effectively utilized in real environments, it is crucial that the target individual in a video is recognized continuously.
A comparative study revealed instances wherein non-human objects were mistakenly recognized as humans. Originally, HybrIK utilized the fasterrcnn_resnet50_fpn algorithm [27] provided by PyTorch for rapid object detection in images, and the detected region of interest was then input into the HybrIK model. However, this method only estimates the object with the highest recognition score and largest area among those recognized in the image, without specifically focusing on human figures. Consequently, non-human objects, such as bicycles, were misidentified as humans, resulting in inaccurate estimations by the HybrIK model. In this study, to improve the recognition accuracy, fasterrcnn_resnet50_fpn trained on the COCO dataset was applied to HybrIK. We used the 2017 version of the COCO dataset [28]. Excluding the background, 11 of the 91 categories in the dataset were omitted, and classification was conducted on 80 objects. This enhancement aimed to refine the object-detection process by focusing specifically on human figures and reducing the likelihood of misidentifying non-human objects as people.

First, the target person for the analysis was identified in the first frame. The object-recognition algorithm predicts considerably more BBs than there are actual objects. Therefore, to adopt the most accurate BB for human recognition, the following steps were performed to eliminate unnecessary BBs.

1. All BBs with a confidence score Conf_i below a certain threshold were removed, where Conf_i is the confidence score associated with BB_i.
2. All remaining BBs that were not identified as humans were removed.
3. Finally, the region of interest (ROI) to be analyzed was determined: among the remaining BBs, only the one with the largest area was retained, and the rest were removed.

If no BB with a confidence score above the threshold existed in the first frame, the threshold was adjusted and the process was repeated. This approach ensured that the most probable human figure was selected for analysis, thereby enhancing the accuracy of the subsequent pose estimation.

Once the subject for analysis was determined, information from the previous frame was used to continuously recognize the target. The process is as follows (a minimal code sketch of these two procedures is given at the end of this subsection).

1. In the current frame, all BBs that were not identified as humans were removed.
2. The intersection over union (IoU) between the BB recognized in the previous frame and each BB in the current frame was calculated. The IoU is a common measure used in object detection to assess the similarity between two regions. It is calculated as the ratio of the intersection area (bboxArea_inter) of the recognized regions in the current (bboxArea_cur) and previous (bboxArea_prev) frames to their union area:

IoU(bbox_prev, bbox_cur) = bboxArea_inter / (bboxArea_prev + bboxArea_cur − bboxArea_inter)    (4)

3. Finally, the BB with the highest sum of the confidence and IoU scores was adopted, where IoU_max = max_i IoU(BB_prev, BB_i) denotes the largest IoU among the candidate BBs in the current frame.

This approach ensured continuous and accurate tracking of the target person across frames, leveraging both the similarity of the detected regions between consecutive frames and the detection reliability. Figure 9 illustrates the results of applying the proposed human recognition algorithm to the HybrIK model, which improved the accuracy of human pose estimation compared with the previous results.
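The following Python sketch illustrates one way to implement the person-selection and frame-to-frame tracking steps described above. It is a minimal illustration, not the authors' code; the confidence threshold, the person class index (1 in torchvision's COCO-trained detectors), and the helper names are assumptions made for this example.

```python
from typing import List, Optional, Sequence, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)
PERSON_LABEL = 1  # "person" class index of COCO-trained torchvision detectors (assumption)


def box_area(b: Box) -> float:
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])


def iou(a: Box, b: Box) -> float:
    # Equation (4): intersection / (area_a + area_b - intersection)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = box_area(a) + box_area(b) - inter
    return inter / union if union > 0 else 0.0


def select_initial_person(boxes: Sequence[Box], labels: Sequence[int],
                          scores: Sequence[float], conf_thresh: float = 0.7) -> Optional[Box]:
    """First frame: keep confident person BBs, then take the largest one."""
    candidates = [b for b, l, s in zip(boxes, labels, scores)
                  if l == PERSON_LABEL and s >= conf_thresh]
    if not candidates:
        return None  # caller may lower conf_thresh and retry, as described above
    return max(candidates, key=box_area)


def track_person(prev_box: Box, boxes: Sequence[Box], labels: Sequence[int],
                 scores: Sequence[float]) -> Optional[Box]:
    """Subsequent frames: pick the person BB maximising confidence + IoU with the previous BB."""
    person = [(b, s) for b, l, s in zip(boxes, labels, scores) if l == PERSON_LABEL]
    if not person:
        return None
    return max(person, key=lambda bs: bs[1] + iou(prev_box, bs[0]))[0]
```

In practice, the boxes, labels, and scores could be taken from the output dictionaries of torchvision's fasterrcnn_resnet50_fpn, after which the selected BB would be cropped and passed to HybrIK.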
Detection of Outliers

In real-world environments, the issues commonly encountered in human model recognition can be categorized as jitter, switching, and misdetection [15]. Therefore, the detection and correction of outliers are essential. In this study, a 3D joint coordinate correction step was conducted to address the shortcomings typically associated with DL-based human pose-estimation algorithms and to improve accuracy.

When capturing movements using a single monocular camera, areas of occlusion occur because of the fixed field of view. These occlusion areas can be categorized as self-occlusions, wherein certain joints are obscured by the body, or external occlusions, wherein the joints are obscured by external objects. Capturing the same pose from different camera positions obscures different joints. In these occluded areas, DL models often generate low-confidence estimates.

Another challenging factor in pose estimation from RGB images is variation in lighting intensity. Irregular changes in lighting can lead to left-right inversions or sudden coordinate distortions. In this study, the inaccurate estimation of occluded areas commonly encountered with DL models and the phenomenon of left-right switching were defined as outliers, and their detection and correction were performed. Misdetection in occluded areas mostly involves the end joints.

Symmetrical inversion of the human body shape occurs when the left and right sides are switched, often around the center of the pelvis or the joints at the centers of the shoulders and pelvis. Figure 10 illustrates the 3D coordinate trajectory of the right shoulder with outliers, with the segments affected by the outliers marked in red. The 3D coordinate trajectory was extracted from the world coordinates provided by MPP [5]; these coordinates were normalized to meters with the center of the hip at the origin. In such cases, all joints must be checked and adjusted, because inversions can occur across the entire body.

In this study, outliers were detected through changes in the lengths of 10 major links, including the shoulder, pelvis, thighs, shins, upper arms, and lower arms. Each link length was calculated as the Euclidean distance between two joints in the 3D pixel coordinate system. The link lengths measured in a pixel frame differ depending on the distance from the camera; therefore, using key information measured in the pixel image, the link lengths measured in the pixel coordinate system were converted into centimeters. For this conversion, the average height information from Size Korea [29] was used, normalizing to the average heights of women and men, 160 and 175 cm, respectively.

Figure 11 illustrates the changes in the lengths of the 10 links for each frame during the walking motion shown in Figure 10. Link lengths vary linearly when a person performs dynamic motion. The proposed algorithm therefore differentiates the length changes per frame and detects nonlinear segments. Figure 12 presents the results of differentiating the lengths of the 10 links shown in Figure 11, where the red areas indicate cases of left-right inversion and the blue areas represent instances of partial joint misdetection.
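A minimal sketch of this link-length-based outlier detection is shown below. The monitored joint indices (COCO-style 17-keypoint layout), the pixel-to-centimeter scale factor, and the frame-difference threshold are assumptions made for illustration; the paper's exact link definitions and thresholds are not reproduced here.

```python
import numpy as np

# Pairs of joint indices forming the 10 monitored links (shoulder, pelvis, thighs,
# shins, upper arms, lower arms), assuming COCO-style 17-keypoint indexing.
LINKS = [(5, 6), (11, 12), (11, 13), (13, 15), (12, 14), (14, 16),
         (5, 7), (7, 9), (6, 8), (8, 10)]


def link_lengths(joints: np.ndarray) -> np.ndarray:
    """joints: (num_frames, num_joints, 3) array of 3D joint coordinates.
    Returns a (num_frames, num_links) array of Euclidean link lengths."""
    return np.stack([np.linalg.norm(joints[:, a] - joints[:, b], axis=-1)
                     for a, b in LINKS], axis=1)


def detect_outlier_frames(joints: np.ndarray, scale_cm: float = 1.0,
                          diff_thresh_cm: float = 5.0) -> np.ndarray:
    """Flag frames in which any link length changes abruptly between consecutive frames.
    scale_cm converts pixel-based lengths to centimeters (e.g. derived from an assumed
    body height of 160 or 175 cm); diff_thresh_cm is an assumed detection threshold."""
    lengths_cm = link_lengths(joints) * scale_cm
    frame_diff = np.abs(np.diff(lengths_cm, axis=0))        # per-frame length change
    outliers = np.any(frame_diff > diff_thresh_cm, axis=1)  # nonlinear segments
    return np.concatenate([[False], outliers])              # align flags with frame indices
```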
Outlier Correction

In this study, we propose an outlier detection and correction method that uses the lengths of the human body links; its structure is shown in Figure 13.

Figure 14 shows the outlier correction process for the 3D coordinates of the right shoulder. First, variations in the lengths of the major links were analyzed. Figure 14a shows the length variations of the right shoulder used to detect outliers, with the detected outlier segments marked in red. The solid lines in Figure 14b,c represent the corrected data after outlier removal, whereas the dashed red line represents the original data. In Figure 14b, the removed segments are interpolated using mean interpolation, that is, the average of the values in the frames immediately before and after each outlier segment. As shown in Figure 14c, a median filter was then applied to smooth the corrected trajectories; applying the filter minimizes the frame-by-frame errors of the DL model.
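The correction step can be sketched as follows. This is an illustrative implementation, not the authors' code; the median-filter kernel size is an assumed parameter.

```python
import numpy as np
from scipy.signal import medfilt


def correct_outliers(traj: np.ndarray, outlier_mask: np.ndarray, kernel: int = 5) -> np.ndarray:
    """traj: (num_frames, 3) trajectory of one joint; outlier_mask: (num_frames,) bool array
    from the link-length-based detector. Each outlier segment is replaced by the mean of the
    last valid frame before and the first valid frame after the segment, then smoothed."""
    corrected = traj.copy()
    if outlier_mask.all():
        return corrected  # nothing valid to interpolate from

    # Mean interpolation over each contiguous outlier segment.
    i = 0
    n = len(traj)
    while i < n:
        if outlier_mask[i]:
            start = i
            while i < n and outlier_mask[i]:
                i += 1
            prev_idx = start - 1 if start > 0 else i      # last good frame before the segment
            next_idx = i if i < n else prev_idx           # first good frame after the segment
            corrected[start:i] = 0.5 * (corrected[prev_idx] + corrected[next_idx])
        else:
            i += 1

    # Median filter applied per coordinate axis to suppress frame-by-frame jitter.
    for axis in range(corrected.shape[1]):
        corrected[:, axis] = medfilt(corrected[:, axis], kernel_size=kernel)
    return corrected
```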
Joint Angle Estimation

To estimate the joint angle trajectories of human motion, a 3D humanoid robot model and an optimization algorithm were employed, using the joint coordinate trajectories corrected by the data-processing algorithm described previously. uDEAS was selected as the optimization method because of its high speed and accuracy, proven in a previous study [30], together with the modified version of combinatorial DEAS (cDEAS), which can also search integer variables [31].

uDEAS is a global optimization method that combines local and global search schemes by representing real numbers as binary matrices using the decoding function in [31]. In the local search, a session comprising a single bisectional search (BSS) and multiple unidirectional searches (UDS) is executed sequentially for each row, from the first to the last variable. The BSS adds a new bit at the rightmost position, and the UDS increments or decrements each binary row (the encoded representation of each variable) depending on the BSS result. For the global search scheme, uDEAS restarts the local search procedure from random binary matrices; among the local minima identified, the one with the minimum cost is selected as the global minimum.

As the number of optimization variables increases, searching them sequentially in a predetermined order during the local search becomes less efficient. To address this, we proposed an adaptive variable-ordering strategy for uDEAS that prioritizes the exploration of variables according to their sensitivity to the cost function. To this end, a cost-sensitivity function of the ith variable in the jth session, v_i^j, was defined from the cost values observed while searching that variable, where L is the cost function, v_{i,BSS}^{j,l/r} and v_{i,UDS}^{j,k} denote v_i^j at the left or right BSS and at the kth iteration of the UDS, respectively, and M is the number of successful UDS iterations after which the cost no longer decreases.

Figure 15a shows an example of a session with the sequential search starting from the binary matrix [10; 01; 00] in the order v_1 → v_2 → v_3, and Figure 15b shows a session with the cost-sensitivity-based search scheme starting from the same binary matrix, with the search order determined by the sensitivities. In each session, the sensitivity values of the optimization variables are calculated and passed to the next session to determine the search order.
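Below is a minimal sketch of the variable-ordering idea only, not of uDEAS itself. The sensitivity measure used here (the cost change attributed to each variable in the previous session) is an assumption made for illustration, since the exact sensitivity formula is not reproduced above.

```python
from typing import List, Sequence


def sensitivity_order(prev_cost_changes: Sequence[float]) -> List[int]:
    """Return variable indices sorted by descending cost sensitivity.

    prev_cost_changes[i] is the (assumed) sensitivity of variable i, e.g. the
    absolute cost reduction attributed to that variable in the previous session."""
    return sorted(range(len(prev_cost_changes)),
                  key=lambda i: prev_cost_changes[i], reverse=True)


# Example: if the previous session reduced the cost by 0.8, 0.1, and 0.4 while
# searching v1, v2, and v3, the next session searches them in the order v1, v3, v2.
assert sensitivity_order([0.8, 0.1, 0.4]) == [0, 2, 1]
```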
During the optimization process, a set of candidate joint angle variables was fed into the humanoid model, which simulated a 3D pose. The objective was to determine the joint angle values that minimized the Euclidean distance between the coordinates of each simulated joint and the corresponding measured joint.

The humanoid model has a total of 26 degrees of freedom (DoF), including transversal shoulder joints and a coronal neck joint, as shown in Figure 16. Compared with the recent model in [20], the humanoid model is described with links and joints based on the Denavit-Hartenberg (DH) method [32], with the origin of the reference frame located at the center of the body so that arbitrary poses can be created. Three-DoF lumbar spine joints were added at the center of the pelvis to realize separate upper-body motions, and the rotational polarity of all joint variables was defined following the Vicon motion capture system [33]. In Figure 16, the shaded orange variables represent the 17 joint angles used for HPE, and the 3 variables θ_bd, ϕ_bd, and ψ_bd are the body angles related to the relative camera view angle, where θ, ϕ, and ψ denote joint angles rotating in the sagittal, coronal, and transverse planes, respectively. To estimate arbitrary poses at any distance from the camera, a size factor γ is necessary, which multiplies each link length; as the camera moves away from the individual, γ decreases, and vice versa. Therefore, the complete optimization vector for pose estimation comprises the following 21 variables:

V = [γ, θ_bd, ϕ_bd, ψ_bd, θ_ws, ϕ_ws, ψ_ws, θ^l_hp, θ^l_kn, θ^r_hp, θ^r_kn, θ^l_sh, θ^l_el, θ^r_sh, θ^r_el, ϕ^l_hp, ϕ^r_hp, ϕ^l_sh, ϕ^r_sh, ψ^l_sh, ψ^r_sh]^T    (7)

where the superscripts l and r represent left and right, respectively, and the subscripts bd, ws, hp, kn, sh, and el denote body, waist, hip, knee, shoulder, and elbow, respectively.

The cost function to be minimized by uDEAS was the mean per joint position error (MPJPE) between the 3D estimated and fitted models, calculated as the mean Euclidean distance between the 12 joint coordinates estimated by MPP, HybrIK, MHFormer, or D3DP and those fitted by the 3D humanoid model in Figure 16:

MPJPE = (1/12) Σ_{i ∈ {l, r}} Σ_{j ∈ {sh, el, wr, hp, kn, an}} || p_{i,j}^{est} − p_{i,j}^{fit} ||

where p_{i,j}^{est} and p_{i,j}^{fit} are the estimated and fitted 3D coordinates of joint j on side i, the superscripts l and r represent left and right, and wr and an denote wrist and ankle, respectively. When the two models overlap exactly, this value reduces to zero.
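The cost evaluation can be sketched in Python as follows. This is an illustrative example: the forward-kinematics routine that maps the 21-variable vector V of Equation (7) to the humanoid model's 3D joint positions is assumed to exist (here called humanoid_forward_kinematics) and is not reproduced.

```python
import numpy as np

# The 12 joints compared in the cost function: shoulders, elbows, wrists, hips, knees, ankles.
COMPARED_JOINTS = ["l_sh", "r_sh", "l_el", "r_el", "l_wr", "r_wr",
                   "l_hp", "r_hp", "l_kn", "r_kn", "l_an", "r_an"]


def mpjpe(estimated: dict, fitted: dict) -> float:
    """Mean Euclidean distance over the 12 compared joints.
    estimated/fitted map joint names to 3D coordinates (np.ndarray of shape (3,))."""
    return float(np.mean([np.linalg.norm(estimated[j] - fitted[j]) for j in COMPARED_JOINTS]))


def cost(v: np.ndarray, estimated: dict, humanoid_forward_kinematics) -> float:
    """Cost minimized by the optimizer: v is the 21-variable vector (size factor, body
    angles, and joint angles); humanoid_forward_kinematics is an assumed helper that
    returns the humanoid model's 3D joint positions for v."""
    fitted = humanoid_forward_kinematics(v)
    return mpjpe(estimated, fitted)
```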
Proposed Method

In this study, we aimed to improve the accuracy of human joint angle estimation through the aforementioned data-processing steps and the application of a humanoid model together with an optimization algorithm that estimates accurate joint angles. The proposed algorithm comprises three major steps, as outlined in Figure 17.

First, the algorithm detects the analysis region in the image data captured by a monocular camera. This involves detecting the person of interest in an RGB image using BBs. Using information from the analysis region of the previous frame, the same person can be tracked continuously and stably. Next, the 3D human joint coordinates are extracted using MPP, MHFormer, or D3DP, based on the skeleton model, or HybrIK, based on the volumetric model. In the next step, outliers that may occur in the DL model are corrected. Here, outliers refer to jitter in the 3D human skeletal coordinates caused by errors in the DL model, misrecognition in occluded areas, and left-right inversion owing to changes in lighting and clothing patterns. These are addressed through the detection of nonlinear changes in the link lengths of the human body. Finally, the corrected 3D human skeletal coordinates are reconstructed onto the humanoid model using the uDEAS optimization method, which enables the estimation of the joint angles.

Experiment

We evaluated the performance of the four HPE methods by conducting three experiments. First, we checked the number of outliers occurring in the real-world video data using the proposed outlier detection algorithm. Table 2 lists the ratio of outliers detected for each DL algorithm on the real-world videos. MPP produced the most frequent outliers in real-world environments, whereas MHFormer exhibited robust joint detection even in situations resembling real-world conditions. At least one outlier was identified for every DL algorithm.

Next, we compared the computational speed of each HPE method to assess its applicability in real-time environments; a simple way to measure per-frame execution time is sketched below. Table 3 lists the average execution times of the four HPE methods measured while processing the standing rowing exercise motion, using the hardware listed in Table 1. HybrIK, MHFormer, and D3DP were 7.63, 4.76, and 4.29 times slower than MPP, respectively. Thus, these DL-based methods are unsuitable for real-time systems in their current state.
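The per-frame timing comparison can be reproduced with a minimal harness such as the one below; the function being timed (estimate_pose) is a stand-in for whichever HPE method is under test, and the dummy workload in the example is only an illustration.

```python
import time


def average_frame_time(estimate_pose, frames) -> float:
    """Average wall-clock execution time per frame, in seconds."""
    start = time.perf_counter()
    for frame in frames:
        estimate_pose(frame)  # run the HPE method under test on one frame
    return (time.perf_counter() - start) / len(frames)


if __name__ == "__main__":
    dummy_frames = [None] * 100
    print(average_frame_time(lambda frame: sum(range(10_000)), dummy_frames))
```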
Finally, we compared the joint angles measured using the Vicon equipment in a laboratory environment with the joint angles calculated using the proposed algorithm. Figure 18 shows the process of analyzing a video shot from the side of a subject performing free gymnastics, similar to rowing, in a motion capture laboratory equipped with the Vicon system. In the video, the light-blue boxes represent the BBs detected as the person area by HybrIK's original code, which mistakenly identified the area from below the knees to the floor as part of the person. Consequently, significant errors were observed in the HPE results. This suggests that, as described in Section 3.3.1, HybrIK detects the largest recognized object and can therefore mis-detect non-human external objects.

Figure 19 shows the results of pose recognition using HybrIK after the proposed filtering algorithm was applied to the RGB images; as shown in the figure, the human poses in the videos were recognized accurately. Figure 20 shows the body reconstruction results obtained by estimating each joint angle of the humanoid model shown in Figure 16 with uDEAS, using the 3D joint coordinates recognized by HybrIK; the reconstructed poses were almost identical to those in Figure 19.

Figure 21 presents a comparison of the joint angle profiles obtained by uDEAS using the joint coordinates estimated by the treated MPP, HybrIK, MHFormer, and D3DP during the standing rowing action. The joint angle profiles of the torso's sagittal and coronal angles, the left and right sagittal angles of the shoulders and elbows, the sagittal right knee angle, and the coronal right hip angle exhibited shapes similar to the results obtained with the Vicon system. Similar angle-profile patterns were observed for the other joints, albeit with certain differences in the offsets.

To check the generality of the approach, we applied the proposed method to two bare-handed gymnastic movements. Figure 22 shows images captured during the back-and-chest exercise (second motion) and the arm-and-leg exercise (third motion). These movements are suitable for pose recognition and joint angle analysis because they create dynamic poses, such as rotations of all arm and leg joints and bending or tilting of the upper body.

Table 4. Mean absolute joint angle error (MAJAE) between estimated joint angles and those measured with the Vicon system, and degrees of improvement for the three gymnastics motions using the original (orig.) MPP, HybrIK, MHFormer, and D3DP and their modified (mod.) versions proposed in this study. The unit of all MAJAE values is degrees (significant results are underlined).

Table 4 lists the mean absolute joint angle error (MAJAE) between the estimated angles and those measured with the Vicon system, along with the degrees of improvement, for the three gymnastic exercises using the original MPP, HybrIK, MHFormer, and D3DP and their modified versions proposed in this study. As seen in the Avg. MPJPE column, all human poses matched the humanoid simulator's poses well, with a maximum joint deviation of less than 4 cm. Among the torso, sagittal (pitch), coronal (roll), and transversal (yaw) joint angles, the torso angles were estimated most accurately, with the average MAJAE being the smallest at 12.14%, and the improvement in the average MAJAE was the highest at 35.44% for the sagittal joint angles. For the coronal joints, it is also encouraging that the MAJAEs were reduced by 27.68% after applying our treatment algorithm. The transverse joint angles of the shoulder showed the largest MAJAE relative to the Vicon data and the smallest improvement, because accurate measurement of these angles also requires recognition of the hand shape, which we did not measure. The overall MAJAE for the three gymnastic motions with the four 3D HPE methods was reduced by 18.99% after applying the proposed human BB recognition improvement and outlier-correction scheme. We believe this result is meaningful: when applying 3D HPE methods to joint angle estimation for HAR, good preprocessing and postprocessing of the 3D HPE data can further improve the joint angle estimation accuracy, regardless of the 3D HPE method used.
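For reference, the MAJAE metric and the corresponding improvement can be computed as sketched below; the angle arrays and their time alignment with the Vicon reference are assumptions made for illustration.

```python
import numpy as np


def majae(estimated_deg: np.ndarray, reference_deg: np.ndarray) -> float:
    """Mean absolute joint angle error in degrees.
    estimated_deg/reference_deg: (num_frames, num_joints) arrays of joint angles,
    assumed to be time-aligned with the Vicon reference."""
    return float(np.mean(np.abs(estimated_deg - reference_deg)))


def improvement_percent(majae_original: float, majae_modified: float) -> float:
    """Relative reduction of MAJAE after the proposed processing, in percent."""
    return 100.0 * (majae_original - majae_modified) / majae_original
```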
Conclusions

In this study, the limitations of HPE on real-world images were identified, and a method to improve the estimation accuracy was proposed. First, four representative 3D HPE methods, MPP, HybrIK, MHFormer, and D3DP, were introduced, and real-world videos were used to show the limitations of the DL models: their performance in dealing with unusual postures, occlusion caused by obstacles, the effects of distance and angle between the camera and the person, and the effects of light intensity changes due to shadows.

Second, signal-processing solutions were proposed to detect and interpolate jitter, switching, and false positives by utilizing link-length derivatives, mean interpolation, and median filtering to improve the estimation accuracy.
Finally, for joint angle estimation using the recognized joint coordinates, we applied a more sophisticated 3D humanoid model than the authors' previous version [20] together with a fast optimization algorithm, uDEAS. To investigate the feasibility of real-time pose analysis based on joint angles, we measured the execution time of each HPE method and compared the joint angle estimation results for three different motions measured with the Vicon system.

The proposed pose correction and joint angle estimation approach yielded an overall MAJAE reduction of 18.99%. In addition, HybrIK exhibited an improvement of 61.18% after the proposed improvement algorithm was applied; however, HybrIK exhibited the slowest computational speed. MPP was best suited for real-time applications, with a computational speed of 0.0409 s per frame, although its inaccuracies in depth perception and the frequent occurrence of outliers require more attention. D3DP and MHFormer showed relatively faster computation speeds than HybrIK but still encounter difficulties in real-time applications, while HybrIK and D3DP exhibited the highest accuracies. Although this study corrected anomalies using simple data-treatment methods, future research involving anomaly correction through behavioral analysis will enhance its applicability. Furthermore, joint-angle-based HAR is expected to help identify injury risks and dysfunction through gait pattern and exercise motion analyses in the field of sports medicine.

Figure 7. Estimation results of MPP on videos with changing light intensity: (a) accurate estimation; (b) inaccurate estimation (square: BB indicating human recognition area).
Figure 9. Estimated recognition results in HybrIK compared with the proposed human recognition algorithm: (a) before processing; and (b) after processing (square: BB indicating human recognition area).
Figure 10. Estimation results of MPP on videos with outliers: (a) example image with outliers; and (b) 3D MPP coordinate trajectory of the right shoulder (green square: BB indicating human recognition area).
Figure 11. Lengths of major links.
Figure 12. Differentiation results of the lengths of major links.
Figure 13. Structure of the proposed outlier detection and correction algorithm.
Figure 14. Outlier correction results for the right shoulder joint positions: (a) removing outliers; (b) average interpolation; and (c) median filter (solid line: corrected data, dashed red line: original data, red area: detected outlier area).
Figure 15. Local search schemes of uDEAS in a 3-dimensional search space: (a) sequential search; and (b) cost-sensitivity-based search (underlined number: modified row, red number: added or modified bit, light blue circle: initial matrix of the session, orange circle: final matrix of the session).
Figure 18. Images captured from a standing rowing exercise (motion 1) video, along with BBs generated by the original HybrIK for HPE (highlighted in light blue).
Figure 19. HPE results using HybrIK after the proposed preprocessing of five RGB images from the same video in Figure 18.
Figure 20. Comparison of measured HybrIK poses (red line: right parts, blue line: left parts, black line: head and torso) and body reconstruction results attained by calculating each joint angle of the humanoid model with uDEAS using the 3D joint coordinate values recognized by HybrIK (lines with circles at the joints).
Figure 22. Images captured from (a) the back and chest exercise (motion 2) and (b) the arm and leg exercise (motion 3) videos generated by HybrIK.
Table 1. System device specifications and implementation environment.
Table 2. Ratio of outliers such as misrecognition in occluded areas and left or right inversion by video file.
Table 3. Average execution times for the HPE methods using the video file, in seconds per frame.
Determinants of Engagement in Off-Farm Employment in the Sanjiangyuan Region of the Tibetan Plateau

The Sanjiangyuan region is a typical ecologically vulnerable region. Although environmental initiatives in the region have had positive results, criticism has arisen that one of these, the ecological migration policy, did not achieve the desired results regarding the transition to off-farm employment and livelihoods. This study examined key factors influencing the engagement of pastoralists in the Sanjiangyuan region in off-farm employment. Binary logit and probit models were adopted along with in-depth household surveys in the Sanjiangyuan region to support the quantitative and qualitative analyses. The results indicate that off-farm employment in the region is generally not significant (18.13% of the investigated households had members working in off-farm sectors), and that education and government subsidies have had significantly positive effects on engagement in off-farm employment, while the number of livestock and the distance between house and town have had significantly negative effects. These results suggest that it is necessary to establish more financial support for off-farm employment and livelihood transitions, in addition to strengthening ecological compensation. Promising approaches could include offering different types of skill training and increasing employment opportunities in off-farm industries.

Introduction

The Sanjiangyuan region is located in the heart of the Tibetan Plateau. It is the source area of the Yangtze, Yellow, and Mekong Rivers and is known as the "water tower" of China (Shao et al 2013; McGregor 2016). The region comprises 363,000 km² and accounts for 50.4% of the total area of Qinghai Province (Du 2012). With its distinctive ecosystem and biodiversity, Sanjiangyuan plays an important role in maintaining the hydrological and ecological security of China and several Southeast Asian countries (Wang et al 2010). However, it is also a vulnerable area influenced by intensifying climate change and human activity (Zhao, Wu, et al 2011; Li, Gao, et al 2013). Over the past 3 decades, mountain glaciers have been shrinking in and around Sanjiangyuan, directly affecting water supplies to plateau lakes and rivers (Kang et al 2010; Sun et al 2012). Meanwhile, with increased population and human activity, environmental degradation has accelerated; problems have included grassland degradation and desertification, wetland ecosystem deterioration, and weakened water conservation (Du et al 2004; Zeng and Feng 2007; Liu et al 2008; Harris 2010; Yi et al 2014). The primary forms of degradation perceived by local residents include increases in rodent burrows; bare areas; increases in weeds, including poisonous weeds; and desertification of meadows. Degradation of Sanjiangyuan's ecosystem threatens not only the ecological security of the river basins, but also the livelihoods of local herders and farmers as well as regional socioeconomic development. To adapt to grassland degradation, adjust resource management, and pursue sustainable pastoralism, some bottom-up strategies have been adopted in pastoral societies. Foggin and Torrance-Foggin (2011) noted that social services can enable partnerships between local herders and conservation authorities to maintain social stability.
Foggin (2011) also noted that collaborative management, such as community co-management and contract conservation in villages in Yushu Tibetan Autonomous Prefecture, can be seen as a new approach to resource management and conservation. Yan et al (2010) noted that farmers and herders diversified their livelihoods to adapt to grassland degradation, while, compared to farmers at low elevations, herders found it hard to seek off-farm employment. Various measures have been implemented by state and local governments to protect the environment, improve residents' livelihoods, and promote economic development. In 2005, The Overall Planning for Qinghai Sanjiangyuan Nature Reserve Ecological Protection and Construction (Qinghai Sanjiangyuan Ziran Baohuqu Shengtai Baohu he Jianshe Zongti Guihua), a document approved by the State Council, was published (Lu 2007). Under this document, herders were compensated for migrating out of severely degraded areas to new settlements (Li, Luo, et al 2013; Qi et al 2014). This mechanism is referred to as "ecological compensation," aiming to realize ecological restoration by economic means, and the associated projects are referred to as "the ecological migration project," a policy initiated by the Chinese government (Chang et al 2014; Du 2012). In addition to compensation for the herders who left, subsidies were provided to the herders who remained in the original areas, including subsidies for not grazing in off-limits areas and for buying forage grass seed and means of production. The government also established poverty-alleviation policies to increase employment opportunities (Qi et al 2014), such as skill training, the development of a local economy, and self-employment subsidies. Although the government projects were based on the expectation that herders would increasingly engage in businesses and services when they moved to urban areas, only a small number of migrants undertook off-farm activities, such as knitting blankets for sale, operating small businesses, or working as security guards, taxi drivers, or construction workers (Foggin 2008; Du 2012). According to our observations during fieldwork, most people in the settlements were unemployed, relying on financial subsidies from the government. However, these subsidies were insufficient for meeting daily expenses and did not keep pace with inflation. This can have major social consequences for cities and towns, for example, by increasing their poverty rates (Ptackova 2011). Studies have suggested that the logic, benefits, and costs of ecological migration need careful reexamination in the grassland areas of Sanjiangyuan, where Tibetans have sustained their livelihoods for hundreds of years (Foggin 2008, 2011; Yeh 2010). Furthermore, the ecological migration project weakened the traditional nomadic culture, respect for natural resources, and close connections between religions (Tashi and Foggin 2012; Qi 2015), which were not widely accepted by herders. In fact, some herders continue herding on banned grasslands or place weak or sick livestock on them (Du 2012; Bessho 2015). In addition, there remains the problem of the postresettlement livelihoods of pastoralists participating in government resettlement projects, where traditional livelihoods have been replaced by a complex web of dependency on the state (Nyima 2014). Seeking off-farm employment is the main livelihood strategy for poor people in developing countries (Dorward et al 2009).
However, case studies in Sanjiangyuan have shown that herders find this difficult, either on their own or with the help of the ecological migration projects. Further quantitative studies are needed on the factors restricting herders from seeking off-farm employment in Sanjiangyuan. Such research can provide evidence for reasonable livelihood-improvement policies, both for ecological migrants and for the remaining pastoralists. This study conducted household surveys in 2014 in 4 parts of Sanjiangyuan and analyzed, using binary logit and probit models, the critical factors influencing herders seeking off-farm livelihoods. The results are reported here, followed by recommendations on ways to support pastoralists seeking off-farm livelihoods in Sanjiangyuan.

Study area

The study area is in southeastern Sanjiangyuan and includes Jimai, Jianshe, and Wosai townships in Darlag County and Baiyu township in Jiuzhi County in the Southeast Golok Tibetan Autonomous Prefecture, Qinghai Province (Figure 1). In earlier studies (Yan et al 2010; Yan, Yu, et al 2011), we found relatively severe grassland degradation in Wosai and Jianshe townships; this study enlarged the scope to examine a broader area. Moderate-Resolution Imaging Spectroradiometer (MODIS) normalized difference vegetation index (NDVI) remote-sensing data on grassland vegetation indicate that most of these townships face pasture degradation, which challenges the livelihoods of local herders (Yu et al 2010). Ecological migration projects have been implemented in the 2 counties, and most of the migrants have already been resettled. This study focused mainly on herders who did not migrate.

Darlag County runs about 162 km east to west and 126 km south to north, and it is located in the plateau region of the Bayan Har Mountains, where the terrain is higher in the northwest and lower in the southeast. The average elevation is more than 4200 m, and it has a typical alpine frigid semihumid climate with cold and warm seasons. The cold season usually lasts 7 to 8 months, with many snowstorms and windstorms. The average temperature is 0.5°C, and the average precipitation is 595 mm, with an average evaporation of 1205.9 mm. The soil type is alpine meadow soil, and the vegetation is composed of alpine meadow, swamp meadow, and shrub meadow. Grassland degradation is serious in Darlag County; there was 7820 km² of deteriorated grassland in 2014, accounting for 69.97% of utilizable areas (Sun 1991; Bai et al 2012; Zhu et al 2014). Animal husbandry is the main industry; in 2014, the total number of livestock was 197,000, including yak, sheep, and horses. At the end of 2014, the gross domestic product (GDP) of Darlag County was 270 million yuan renminbi (CNY; 1 US$ = 6.1428 CNY in 2014), of which animal husbandry contributed 97.5 million (Darlag County Government 2014).

Jiuzhi County shares a border with Darlag County. It has a land area of 8757.25 km². The Nianbaoyuze mountain range runs across the territory, with a north-south valley to the south of the mountain range and wide valleys and intermontane basins to the north. The elevation varies from 3568 to 5369 m, with a typical plateau continental climate with 2 seasons, cold and warm. The average annual temperature is 0.5°C, and the temperature is lower than 0°C up to 184 days of the year, including 131 days with temperatures lower than −10°C. Annual sunshine totals 2084.5 to 2509.5 hours, which is the least in Qinghai Province.
It rains an average of 171 days per year, and the annual precipitation is 764.4 mmthe highest in Qinghai Province. The main soil type is alpine meadow soil, and the vegetation type is alpine meadow. The area of grassland degradation in 2014 was 25.57% in Jiuzhi County as a whole, but it was 59.27% in Baiyu township, the most seriously degraded area. Animal husbandry is the main industry in this county; in 2014, the total number of livestock was 245,100, including yak, sheep, and horses. At the end of 2014, the GDP of Jiuzhi County was 294 million CNY, of which the animal husbandry contributed 136 million (Jiuzhi County . Livestock and grasslands are the most important livelihood assets for local people. Residents also collect Cordyceps sinensis (dong chong xia cao) for additional income (Yan et al 2010) and receive subsidies through the ecological compensation mechanisms described earlier. Data collection In-depth fieldwork was conducted in August 2014 and included presurveys, formal household surveys, and group discussions. First, we randomly chose 10 households in Darlag County for presurveys and revised the questionnaire based on those results. Then, we used random sampling to select households for surveys conducted through semistructured interviews, and we held informal discussions with people who were familiar with the rural transformation of the sampled townships. The head of the surveyed household was the main respondent, and other household members also supplied information. Two local Tibetan speakers were employed as translators. We discussed the household questionnaires with the translators and determined how to formulate each question. Table 1 lists the main content of the questionnaire. Since the herders stayed in summer pastures where the tents were very scattered, investigators drove by car to the general location and then walked to each tent and explained the purpose of the survey. The interview started when the household agreed to answer the questionnaire. Surveys were conducted in tents or houses and lasted 1 to 2 hours. Villages were randomly selected from each township, and approximately 40 households were chosen from each township due to the dispersed nature of the settlements. None of the sample households had taken part in an ecological migration project. In total, questionnaires were administered to 173 households. Of these, 13 did not have a labor force (labor force here means those who have varying degrees of labor capacity); thus, 160 valid responses were analyzed in this study (39 in Wosai township, 40 in Jianshe township, 43 in Jimai township, and 38 in Baiyu township). Of these, 39 households practiced household-based grazing management, and 121 practiced group/community-based grazing management, averaging nearly 8 households per grazing group. In addition, 3 workshops were organized to discuss the factors influencing engagement in off-farm employment; participants included 6 township officials, 20 village representatives, and 5 rural teachers. After summarizing and identifying the comments from the informants, we drew general conclusions. Classification of sample households We divided the sampled households into 2 groups based on whether any household members engaged in off-farm employment. Off-farm employment included selfemployment (eg running a store or business, carpet making, or driving a taxi) and wage/salary employment (eg as a teacher, security guard, sanitation worker, or waiter). Collecting C. 
sinensis was not considered off-farm employment, though operating a C. sinensis-related business was. Econometric model A binary discrete choice model was used in this study to statistically analyze the behavior of individuals and households. Generally, 2 types of binary choice models are used according to the probability distribution assumed for the random error: the probit model is used when the random error follows a standard normal distribution, and the logit model is used when the random error follows a logistic distribution. Since the distribution of the random error was unknown in this study, we adopted the binary logit model for the regression analyses of the data and the probit model to show the stability of the results. Many studies have identified factors influencing the livelihood strategies of farmers and herders at the household level. The most frequently studied factors include a variety of livelihood assets, which are influenced by decisions of household resource allocation, and macrosocial and economic factors such as natural disasters, climate change, market opportunities, and policy (Khatun and Roy 2012; Kuwornu et al 2014). [TABLE 1 Main content of the questionnaire. Demographics: education, labor capacity, health condition, and skill mastery of household members. Livelihoods: employment, reasons for and barriers to working in nonfarm sectors (eg part-time work in the manufacturing or service sector). Material property: use of the household's material possessions for agricultural and nonagricultural purposes. Education and development: attitude toward children's education, economic level of household, methods to improve the quality of life, hope to get help when facing natural hazards. Income and expenditures: household income and expenditures in the last year. Grassland use: grassland degradation, maintenance (including fencing, seeding, and rodent control), and rent. Livestock production: grazing methods, number of livestock, disasters in the past 10 years, response to disasters. Other: collection of Cordyceps sinensis, condition of house.] Table 2 lists the factors that are widely covered in the literature. These can be categorized as human, financial, natural, social, or physical. Obviously, the quantity and quality of human assets determine whether farmers use other assets to pursue different livelihood strategies. Our model aimed to quantitatively analyze the factors that influence engagement in off-farm employment in Sanjiangyuan. From the factors described in Table 2, 9 were selected to serve as the independent variables for this analysis (Table 3). These factors were tested for multicollinearity to ensure the stability of the model and the reliability of the output. For this purpose, Pearson correlation analysis, tolerance, variance inflation factor, eigenvalue, and condition index were used to test the relationships between independent variables. Pearson correlation analysis showed that the maximum absolute value of the correlation coefficient between the variables was 0.312 (between the skilled labor ratio and car ownership), with all coefficients less than 0.8. Tolerance was greater than 0.1, the variance inflation factor was less than 2, the eigenvalues were not equal to 0, and the condition index was less than 30. These test results indicated no multicollinearity between the independent variables.
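The estimation strategy just described can be sketched in a few lines of Python with statsmodels. This is only an illustrative sketch, not the authors' original code; the file name and the column names (eg household_survey_2014.csv, offfarm, distance_to_town_km) are hypothetical placeholders for the nine independent variables of Table 3.

```python
# Illustrative sketch of the binary logit (with a probit stability check) and the
# multicollinearity diagnostics described above. File and column names are
# hypothetical placeholders for the variables in Table 3.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("household_survey_2014.csv")      # hypothetical survey file

y = df["offfarm"]                                  # 1 = any member works off-farm
predictors = ["perceived_grass_quality", "livestock_sheep_units",
              "distance_to_town_km", "car_ownership", "subsidy_income_log",
              "education", "skilled_labor_ratio", "loans_log", "labor_force"]
X = sm.add_constant(df[predictors])                # adds the intercept column

# Multicollinearity checks: pairwise Pearson correlations and VIFs
print(df[predictors].corr().round(3))              # all |r| should stay below 0.8
vif = pd.Series([variance_inflation_factor(X.values, i + 1)
                 for i in range(len(predictors))], index=predictors)
print(vif)                                         # rule of thumb used above: VIF < 2

# Logit estimates, with probit re-estimation as a stability check
logit_res = sm.Logit(y, X).fit(disp=False)
probit_res = sm.Probit(y, X).fit(disp=False)
print(logit_res.summary())
print(probit_res.params)
```

Passing the same design matrix to both Logit and Probit mirrors the stability check reported with the regression results: coefficients that are significant and same-signed in both fits correspond to the "consistent and stable" findings discussed below.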
Characteristics of sample households In total, 29 households had members engaged in off-farm employment, accounting for 18.13% of the total sample (Table 4). The average size of the labor force in these households was higher than that for other households. The income of households with off-farm work (40,866.89 CNY per household) was much higher than that of households without it (23,181.61 CNY per household). Households with off-farm work depended less on animal husbandry than households without it, owning fewer livestock. Those with off-farm work owned, on average, 94.55 sheep units per household, while those without off-farm work owned 102.91 sheep units (livestock units were calculated as 1 yak = 4 sheep and 1 horse = 5 sheep). Grasslands were significantly degraded for all households, but somewhat less so for households with off-farm employment. Factors influencing engagement in off-farm employment The logit and probit model results were consistent and stable (Table 5). The specific results of the regression analysis are described below. Perceived grassland quality had no significant effect on whether a household engaged in off-farm employment, which differed from our assumptions. A possible reason is that grassland quality was evaluated by the herders themselves and thus lacked a unified standard. In addition, the effect of grassland degradation on engagement in off-farm employment may differ depending on the size and type of grassland. The number of livestock had a negative influence on engagement in off-farm employment, consistent with our expectation. However, Sanjiangyuan has different off-farm employment mechanisms than other pasture areas in China. In other areas, the grazing prohibition and ecological compensation policies have encouraged reductions in livestock, and the animal husbandry labor force has been free to engage in off-farm employment due to the reduced dependency on livestock (Ji et al 2007; Tian 2011; Hou et al 2012; Meng et al 2013). Moreover, off-farm employment is relatively easy to find in other pastoral areas. Distance between house and town had a negative effect on engagement in off-farm employment; that is, the greater the distance, the lower the likelihood of engagement in off-farm employment. Car ownership had no significant influence. This may be because the households in the study area often chose to buy cheap secondhand cars with a low access threshold, and there was no obvious difference between households in this regard. Amount of income from subsidies had a positive effect on household engagement in off-farm employment. Subsidy income is often used to invest in a new business and create a sustainable livelihood. In households with off-farm employment, 76 people were engaged in off-farm activities, including 32 operating a business (eg livestock and C. sinensis businesses, transport services, and stores), 16 part-time workers, and 28 people working for governments or corporations (eg village representatives, forest rangers, and epidemic prevention coordinators). [Fragment of Table 3 (independent variables): total loans from relatives and banks (CNY, values presented as logarithms), 4.31 and 5.02, with an expected positive effect on the decision to pursue off-farm employment. A table note explains that family grassland area was not chosen to reflect the status of natural assets, because fieldwork showed that most of the nomads have only a fuzzy memory of their grassland area, so "perceived grassland quality" was used as an alternative; monetary values are in CNY (1 $US = 6.1428 CNY in 2014).] Education had a significant positive effect on engagement in off-farm employment.
The ratio of skilled labor (with skills such as driving, agricultural technology, and sewing that can diversify their livelihoods) to overall household labor capacity did not have a significant effect, even though the coefficients were positive. Skill training in the area is low, and training projects are often very simple. In the households participating in the study, only 28.49% of the workforce had received any skill training, of which 82.08% consisted only of driver training. Thus, skill training was insufficient to promote off-farm employment. Though access to loans did not have a significant influence, its importance should not be ignored. Illness (mainly pulmonary tuberculosis) was a major reason that households took out loans. This is also a common livelihood risk in the study area and often leads to labor-force reduction. Nomads are susceptible to disease because of the harsh climate, and infectious diseases spread easily in their small tents. We discussed the tuberculosis issue with local doctors, who noted that nomads do not always know when they are infected because they do not have annual examinations. When the disease becomes severe, nomads have to go to large hospitals for treatment and have to borrow money from relatives to pay for it. Factors influencing engagement in off-farm employment The main factors influencing engagement in off-farm employment include the number of livestock, distance between house and town, subsidy income, and education. The finding on the number of livestock is consistent with that of a study in Uxin Banner in the Inner Mongolia Autonomous Region, where the Grain for Green policy and the reduction in the number of livestock promoted the transition to off-farm livelihoods, as well as the transition from traditional (nomadic and seminomadic) animal husbandry to intensified agricultural management (ie to contract farming in animal husbandry) (Meng et al 2013). [Fragment of Table 4 (source: authors' survey in 2014), education and skills of the two household groups: junior high school, 10% and 5%; middle and vocational school, 1% and 1%; university and above, 3% and 1%; skilled labor ratio (percentage of the household labor force, calculated as described in Table 3, that can be considered skilled), 27.40% and 25.42%.] Studies in the Gannan areas in the Eastern Tibet Plateau and in Yanchi County, Ningxia Province, have yielded similar findings (Ji et al 2007; Tian 2011; Hou et al 2012). However, the grassland degradation of other pastoral areas is not as serious as in Sanjiangyuan. Our finding on the effect of distance between house and town was consistent with our expectations; it is also consistent with a study of pastoral societies in southern Ethiopia (Eneyew 2012), where shorter distances to the market town resulted in lower costs to engage in nonfarm employment. Our finding on subsidy income was also consistent with our expectations. Ecological compensation was the source of 86.41% of subsidy income. Since financial assets are usually a key factor in livelihood diversification (Wassie et al 2007; Nega et al 2009; Khatun and Roy 2012), ecological compensation provides capital for herders to seek off-farm employment. Our finding on education was consistent with findings in the upper reaches of the Dadu River and Gannan areas (Yan et al 2010; Zhao, Li, et al 2011), which showed the positive effect of basic education on engagement in off-farm employment. The ratio of skilled labor to household labor capacity did not have a statistically significant effect on engagement in off-farm employment in this study.
A possible explanation for this can be found in a study in Zeku County, Sanjiangyuan (Qi et al 2014), which found that the most requested training programs were for driving, the Mandarin language, and Internet use; other studies have found that the training provided by the government cannot meet all the training needs of herders (Sun 1991;Lu and Zhao 2009;Tian 2011). Policy implications Empirical studies of resettlement areas in Sanjiangyuan have found that most ecological migrants are families with few or no livestock, giving rise to a pessimistic view of ecological migrants' livelihoods and sharp criticism of government projects. However, our study of herders who still graze livestock indicates that the livelihood situation is not so pessimistic, though there are some factors that prevent pastoralists from shifting to off-farm employment. A reasonable explanation is that ecological migrants are among the most vulnerable herders and lack the assets needed to seek off-farm employment. For those vulnerable groups, a feasible approach would be to improve subsidy standards to cover living necessities. Since the herders in this case study sought off-farm employment, we recommend that the government should adjust its current policies from livelihood maintenance to livelihood promotion in the second phase of Qinghai Sanjiangyuan Nature Reserve Ecological Protection and Construction. First, more effective methods of skills training are necessary to meet herders' needs. Second, to ensure the sustainable development of the regional environment and economy, regional resources and cultural characteristics should be fully utilized to develop plateau tourism and manufacture of distinctive local products, which could increase employment opportunities for local people. Finally, as the ecological compensation policy has already been mentioned, it is important to guide ecological migrant households to return grazing land to grassland and reduce livestock numbers, and to engage in the daily management of natural forests, riverbeds, abandoned land, wildlife, and wetlands in the ecological function conservation areas. Conclusion Understanding the drivers of and barriers to engagement in off-farm employment in Sanjiangyuan is crucially important for policymaking regarding livelihood improvements. Based on 2014 survey data for 4 townships in Sanjiangyuan, we used binary logit and probit models to quantitatively analyze the key factors affecting herders' decisions to seek off-farm employment. We aimed to find an effective way to motivate pastoralists to engage in nonagricultural occupations and make policy suggestions for improving post-resettlement livelihoods. The results indicate that education helps to expand the range of employment opportunities, and government subsidies provide capital, both of which promote off-farm employment. Animal husbandry has a stronger demand for labor and thus hinders off-farm employment. In addition, willingness to pursue off-farm employment generally declines with increased distance between the household and the nearest town.
5,714.8
2017-12-20T00:00:00.000
[ "Environmental Science", "Economics", "Sociology" ]
Feebly Interacting $U(1)_{\rm B-L}$ Gauge Boson Warm Dark Matter and XENON1T Anomaly The recent observation of an excess in the electronic recoil data by the XENON1T detector has drawn many attentions as a potential hint for an extension of the Standard Model (SM). Absorption of a vector boson with the mass of $m_{A'}\!\in\!(2\,{\rm keV},\!3\,{\rm keV})$ is one of the feasible explanations to the excess. In the case where the vector boson explains the dark matter (DM) population today, it is highly probable that the vector boson belongs to a class of the warm dark matter (WDM) due to its suspected mass regime. In such a scenario, providing a good fit for the excess, the kinetic mixing $\kappa\!\sim\!10^{-15}$ asks for a non-thermal origin of the vector DM. In this letter, we consider a scenario where the gauge boson is nothing but the $U(1)_{\rm B-L}$ gauge boson and its non-thermal origin is attributed to the decay of the scalar talking to the SM sector via a portal with the SM Higgs boson. We discuss implications for the dark sector interactions that the vector DM offers when it serves as a resolution to both the small scale problems that $\Lambda$CDM model encounters and the XENON1T anomaly. I. INTRODUCTION Recently, the XENON1T collaboration reported an excess in the electronic recoil data for the energy regime ranging from 1 keV to 7 keV [1]. Especially, the prominence of the excess for m A ∈ (2 keV,3 keV) aroused many interesting interpretations based on various extensions of the SM. Absorption of a vector boson is one of the plausible possibilities for the excess in which case its suspected mass and kinetic mixing read m A ∈ (2 keV,3 keV) and κ ∼ 10 −15 respectively [2][3][4][5]. Provided this vector boson serves as a dominant component of DM today, its suspected mass regime could be of interest in regard to the small scale problems (e.g. core/cusp problem [6], missing satellite problem [7,8], too-big-to-fail problem [9]); keV scale WDM can alleviate some of the small scale problems if the free-streaming length travelled by the WDM amounts to O(0.1) Mpc [10]. Note that, however, the dark photon (A µ ) mass m A ∈ (2 keV,3 keV) is actually outside of the allowed thermal WDM mass regimes inferred from Lyman-α forest observation m thermal wdm > 5.3 keV [11] and redshifted 21cm signals in EDGES observations m thermal wdm > 6.1 keV [12,13]. Therefore in order for the dark photon to be a candidate for WDM to address the small scale problems, it should be a non-thermally originated one. Very interestingly, this line of reasoning to have a non-thermal dark photon DM (DPDM) gathers momentum when it is taken into account that a fit of good quality for the excess can be accomplished for the kinetic mixing κ ∼ 10 −15 [1][2][3][4]. Being the most direct and dangerous coupling to thermalize A µ with the SM thermal bath, the observed kinetic mixing κ ∼ 10 −15 ensures that A µ can avoid to join the SM thermal bath unless there is another significant indirect coupling to the SM sector. Now there arises an interesting question: what could be a non-thermal production mechanism that allows the mass regime m A ∈ (2 keV,3 keV) consistent with the aforementioned constraints on m thermal wdm ? As we shall discuss, an answer to this question concerns momentum space distribution of A µ which decides the map from m thermal wdm to m A . Different non-thermal production mechanisms will be characterized by different momentum spaces for A µ . 
Namely, given the same constraint on m thermal wdm , this implies that we would have different lower bounds for m A , depending on what kind of non-thermal production mechanism produces A µ . In this letter, we study the scenario where the keVscale DPDM with the suppressed kinetic mixing arises with a non-thermal origin. Especially for m A ∈ (2 keV,3 keV), the DPDM can induce the XENON1T anomaly through absorption analogous to the photoelectric effect. We identify A µ with the massive gauge boson of the broken U (1) B−L gauge theory which is the most well-motivated minimal extension of the SM. 1 When incorporated with a scalar and three heavy right handed neutrinos, U (1) B−L gauge theory provides us with the most elegant explanation for the origin of the smallness of the active neutrino masses based on the seesaw mechanism [16][17][18] and the leptogenesis [19,20]. In spite of this concreteness of the model, our mechanism and result can be easily generalized to a hidden broken U (1) gauge theory, emphasizing its usefulness in the study of the DPDM. To enable the observed mass regime m A ∈ (2 keV,3 keV) to be consistent with the given constraint on m thermal wdm , we consider the production mechanism where the non-relativistic scalar charged under U (1) B−L produces A µ via its decay after U (1) B−L gets spontaneously broken. 2 The parent scalar particle is assumed to be produced non-thermally from the SM thermal bath. We show the mapping between m thermal wdm and m A explicitly, corroborating that the production mechanism of our interest indeed provides a converted experimental constraint on m A making mass regime m A ∈ (2 keV,3 keV) survive. By computing the freestreaming length based on the thermal history of A µ in the model, we further show presence of the parameter space in which A µ becomes the WDM candidate resolving some of the small scale problems. II. MODEL We consider the broken U (1) B−L gauge theory of which the massive gauge boson A µ is taken to be the DM candidate in the model. On top of the SM particle contents, we introduce a scalar Φ (-2) and three right-handed neutrino Weyl fields N i=1,2,3 (+1) where the numbers in the parenthesis denote U (1) B−L charges assigned to each field. These additional fields form the following Yukawa coupling by which the right handed neutrinos acquire masses when the condensation of Φ induces the spontaneous breaking of U (1) B−L . Stemming from the large vacuum expectation value (VEV) of Φ, the heaviness of these right handed neutrino fields can explain the tiny active neutrino masses via the seesaw mechanism [16][17][18]. Moreover, the out-of-equilibrium decay of the heavy righthanded neutrinos creates the lepton asymmetry of the universe which is converted into the baryon asymmetry later with the help of the sphaleron transition [19]. For the consistency of the model shown to be later, we consider the reheating temperature T RH 10 14 GeV. We assume that two of mass eigenvalues of N are comparable to <Φ>≡ V B−L / √ 2 while the mass of the remaining one lies between T RH and V B−L (T RH < V B−L ). Then at the reheating era, the lightest N serves as the source of the SM thermal bath via its decay into leptons and the SM Higgs (non-thermal leptogenesis) [20,34]. Since A µ in the model is identified with the hypothetical vector boson triggering the XENON1T excess, we set Since we assume V B−L 10 14 GeV for the leptogenesis, we further set g B−L 10 −20 . 
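As a quick order-of-magnitude check of these two choices, one can use the relation $m_{A'} = 2 g_{\rm B-L} V_{\rm B-L}$ quoted later for the decay-rate conversion; this is a back-of-the-envelope step using only the inputs stated above: $m_{A'} = 2\, g_{\rm B-L}\, V_{\rm B-L} \sim 2 \times 10^{-20} \times 10^{14}\,{\rm GeV} = 2 \times 10^{-6}\,{\rm GeV} = 2\,{\rm keV}$, which indeed lands inside the $(2\,{\rm keV}, 3\,{\rm keV})$ window favoured by the XENON1T excess.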
On the other hand, the scalar sector of the model is described by the following potential where H denotes the SM SU(2) Higgs doublet and V (H) is the Higgs potential in the SM. We see that there appear two new dimensionless couplings λ and λ HΦ as compared to the SM scalar sector. In the coming next sections, we study how these new couplings enable A µ to play a role of the non-thermally produced WDM which can resolve the small scale problems and explain the XENON1T anomaly. III. PRODUCTION OF THE DARK PHOTON At the reheating era with T RH 10 12 GeV, the scattering process H + H * → φ + φ * based on the Higgs portal interaction in Eq. (2) can produce the radial mode of Φ non-thermally for a sufficiently small λ HΦ . 3 Here We shall take φ as the parent particle of which decay produces a pair of A µ when it becomes non-relativistic. For our purpose of having A µ as a non-thermal WDM candidate, we figured out that the model is consistent with a large enough λ which enables φ to form the dark thermal bath. The tiny gauge coupling and the small Higgs portal coupling (λ HΦ ) keeps the purity of the dark thermal bath until φ becomes non-relativistic so that the dark thermal bath is made up of φ only. φ avoids to be thermalized by the SM plasma since it is produced from the scattering process H + H * → φ + φ * if the associated interaction rate continues to be smaller than the Hubble expansion rate, i.e. Γ < H until the SM Higgs particles are integrated out. As we shall see in the discussion of the DM relic density, we found that for the reheating temperature we consider in the model, λ HΦ is sufficiently small not to thermalize φ until T SM 100GeV is reached. Below T SM 100GeV, both the SM Higgs and φ do not exist and thus the scattering process H + H * → φ + φ * cannot take place physically. As soon as φs are produced via the Higgs portal at the reheating era, due to the self-quartic interaction (λ) shown in Eq. (2), the dark thermal bath forms via the scattering process φ + φ → φ + φ if the associated interaction rate wins against the Hubble expansion rate, where ξ ≡ T DS /T SM is used. Eq. (3) gives us the maximum allowed reheating temperature T SM,max (a RH ) λ 2 ξM P below which Γ 2φ→2φ > H always holds. As will be shown later, the correct DM relic abundance matching gives us ξ 0.5. Apart from Eq. (3), when the dark thermal bath formation requires φ to be relativistic at the reheating era. Thus, we take the following hierarchy as the condition for formation of the dark thermal bath where T DS is the dark thermal bath temperature. The parameter space meeting the condition in Eq. (4) is shown as the area below the green dotted line in Fig. 1. Thereafter, φ in the dark thermal bath continues to experience redshifting to become the non-relativistic particle when m Φ T DS is reached. 4 Thereafter, when the time for the φ's decay rate Γ φ→A +A to be comparable to the Hubble expansion rate is reached, the non-relativistic φ starts to decay to a pair of A µ s. Since then, A µ begins free-streaming as the WDM. By equating the decay rate Γ φ→A +A to the Hubble expansion rate during radiation dominated era, we obtain the temperature at which the free-streaming of A µ begins where we used m Φ = √ 2λV B−L , Q Φ is the U (1) B−L charge of Φ, and a FS is the scale factor for the onset of the free-streaming of A µ . For the second line in Eq. (5), we used Q Φ = −2. 
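A rough numerical check of the dark-bath formation condition quoted above (Eq. (3)) is also instructive. Taking the values found below, $\xi \simeq 0.5$ and $\lambda \simeq 10^{-2}$, with $M_P$ the Planck mass and the exact prefactor left loose (a back-of-the-envelope estimate, not a result stated in the text), $T_{\rm SM,max}(a_{\rm RH}) \approx \lambda^{2} \xi M_P \sim (10^{-2})^{2} \times 0.5 \times M_P \approx 10^{14}$–$10^{15}\,{\rm GeV}$, safely above the reheating temperatures of order $10^{12}\,{\rm GeV}$ considered for the Higgs-portal production, so $\Gamma_{2\phi \to 2\phi} > H$ is comfortably satisfied once the $\phi$ population is created.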
To make it sure that the time corresponding to a FS is preceded by the time when φ becomes nonrelativistic, we impose the condition m φ > ξT SM (a FS ), which results in where we referred to Eq. (5) for T SM (a FS ). This condition is satisfied by the region above the red dotted line in Fig. 1. 4 Note that A µ cannot join the dark thermal bath by scattering among φs due to the small g B−L . This allows φ to decay after it becomes non-relativistic, which is the key aspect that makes A µ 's mass regime inferred from the XENON1T survive against the Lyman-α forest constraint on the WDM mass. The similar nonthermal WDM production mechanism was also used in Refs. [3,37,38]. In this section, we studied the constraints on parameters in Eq. (2) and T SM (a RH ) ≡ T RH obtained by requiring (a) non-thermal production of φ, (b) thermal bath formation by φ self-interaction, (c) absence of thermalization of φ prior to its decay and (d) non-relativistic φ's decay to A µ s. These constraints would be further refined in the next section by several additional cosmological conditions that A µ should meet as the WDM candidate. As a final note before concluding this section, we notice that the decay rate for the main decay mode of φ given in Eq. (5) does not depend on the gauge coupling g B−L thanks to the conversion of the form of the decay rate based on m A = 2g B−L V B−L . This facilitates the early production of A µ from the decay of φ even if the model is featured by the tiny gauge coupling. IV. DARK PHOTON AS THE WDM Here we study physical quantities that characterize the dark photon A µ as the WDM. These quantities include the relic density, ∆N eff contributed by A µ , the freestreaming length λ FS of A µ and the current constraint on the mass of A µ based on the Lyman-α forest observation. A. Relic density We assume that A µ explains the whole of DM population today. The fraction of the energy density of the universe today attributed to the DM reads where the comoving number density of DM, Y DM ≡ n DM /s, is the conserved quantity since the production of the DM. From the values of s 0 = 2.21 × 10 −11 eV 3 and ρ cr,0 = 8.02764 × 10 −11 × h 2 eV 4 , one obtains where h is defined to be H 0 = 100hkm/sec/Mpc. On the other hand, having the decay process φ → A µ + A µ as the origin of DM in the model, regarding number densities of particles, we have the relation n A = 2n φ . Given the non-thermally produced φ from the SM Higgs scattering, the comoving number density of DM discussed in Eq. (8) can be written as [36] Thus, using h = 0.68, equating Eq. (8) and Eq. (9) gives λ HΦ 8.86 × 10 −7 × T RH 10 9 GeV Note that for a given T RH 10 12 GeV, we see that λ HΦ in Eq. (10) is still small enough to ensure that Γ H+H * →φ+φ * < H holds until H becomes not energetic enough to produce φs via the scattering and φ disappears in the dark sector via its decay. Using the forms of n φ and s SM as functions of T DS and T SM , we can write Y DM given in Eq. (9) as the function of ξ. By equating the result to Y DM inferred from Eq. (8), we obtain ξ 0.5 for the mass regime of m A ∈ (2 keV,3 keV). B. ∆N eff contributed by the dark photon As the keV-scale DM candidate, A µ is the relativistic particle during the BBN era, behaving as the radiation. Therefore, its energy density contributes to the background expansion during the radiation dominated era, which is parametrized by its contribution to the extra effective number of neutrinos, ∆N BBN eff . 
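Before turning to $\Delta N_{\rm eff}$, the relic-abundance matching in Eqs. (7)-(9) can be checked numerically. The sketch below is illustrative only: it assumes the standard value $\Omega_{\rm DM} h^2 \approx 0.12$ (not quoted explicitly above) and reuses the $s_0$ and $\rho_{\rm cr,0}$ values given in the text.

```python
# Illustrative check of the relic-abundance matching in Eqs. (7)-(9).
# Assumes Omega_DM h^2 ~ 0.12; s_0 and rho_cr,0/h^2 are the values quoted in the text.
S0 = 2.21e-11                  # eV^3, entropy density today
RHO_CR_OVER_H2 = 8.02764e-11   # eV^4, critical density today divided by h^2
OMEGA_DM_H2 = 0.12             # assumed Planck value (the h^2 factors cancel below)

def y_dm_required(m_dm_ev):
    """Comoving number density Y_DM = n_DM/s needed to saturate the DM abundance."""
    return OMEGA_DM_H2 * RHO_CR_OVER_H2 / (m_dm_ev * S0)

for m_kev in (2.0, 2.5, 3.0):
    y = y_dm_required(m_kev * 1e3)           # keV -> eV
    print(f"m_A' = {m_kev} keV -> Y_DM ~ {y:.2e}, Y_phi = Y_DM/2 ~ {y/2:.2e}")
```

For $m_{A'}$ in the (2 keV, 3 keV) window this gives $Y_{\rm DM}$ of a few times $10^{-4}$, with $Y_\phi = Y_{\rm DM}/2$ through $n_{A'} = 2 n_\phi$; this is the target that the Higgs-portal scattering must supply and that fixes $\lambda_{H\Phi}$ in Eq. (10).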
The amount of energy density of A µ at BBN time is given by where Y DM is given in Eq. C. Free-streaming length of the dark photon As the WDM candidate, A µ can alleviate the small scale problems provided its free-streaming length amounts to λ FS O(0.1)Mpc [10]. 6 Thus, we focus on the parameter space which can produce λ FS 5 By referring to Eq. (5), we obtain a FS 6 In Ref. [10], WDM is characterized by 0.01Mpc λ FS 0.1Mpc. O(0.1)Mpc to make A µ DPDM WDM candidate addressing the small scale problems. The free-streaming length is computed by where F (a) ≡ Ω rad,0 + aΩ m,0 + a 4 Ω Λ,0 , a FS is given in Eq. (12) and <p DM (a FS )> m Φ /2 in the instantaneous decay limit. Since a FS is a function of (λ, m Φ ), so is λ FS in our model. D. A µ mass constraint from the Lyman-α forest From the observation that WDMs of different origins have similar effects on the linear matter power spectrum if their velocity variances (β) are close to each other [40], 7 one can map the mass constraint on the thermal WDM from Lyman-α forest observation to a mass constraint on a non-thermally originated WDM [41]. The velocity variance is defined to be where f (q) is a momentum space distribution for the DM of interest. In our model, A µ is produced from the nonrelativistic particle's decay. As such, A µ is characterized by the momentum space distribution of the form f (q, t) = (B/q)exp(−q 2 ) with q ≡ p/T [40,[42][43][44][45]. Here B is a normalization factor. This yieldsβ A 1 whilẽ β thermal wdm 3.6 for the fermionic thermal WDM. On the other hand, the thermal WDM's temperature today is given by [46] T thermal wdm,0 where T γ,0 2.35 × 10 −4 eV is the photon temperature today. In addition, using Eq. (12), we obtain DM temperature today GeV . (18) Eventually equipped with the aforesaid information above, by equating β A with β thermal wdm , we obtain the lower 7 In references we cite here, the velocity variance is denoted by σ. But here we use β on behalf of σ. when we enter the lower bound (lb) of the thermal WDM mass into m thermal wdm,lb . Given m thermal wdm > 5.3 keV from Lyman-α forest observation [11], we show the parameter space resulting in the lower bound of m A smaller than 3keV as the region below the blue dotted line in Fig. 1. (Applying the conservative bound m thermal wdm > 2 keV [46,47], we also obtain the area below the cyan dotted line in Fig. 1.) 9 This confirms that the observed mass regime m A ∈ (2 keV,3 keV) is consistent with Lyman-α forest observation if A µ is the WDM produced based on the mechanism we discussed in our model. 8 Note that comparison of the velocity variance β (warmness) of the thermal WDM and A µ should be done at the matter-radiation equality time. Since both WDM decouple at a time earlier than the matter-radiation equality time, T A /T thermal wdm |a=a eq = T A ,0 /T thermal wdm,0 . 9 The same thing can be done for the case of 2keV. E. Combined constraints on V (Φ) In Fig. 1, for the exemplary value of m A = 3keV, we show the resultant parameter space of (m Φ , λ) obtained by applying the constraints on ∆N BBN eff and the free-streaming criterion. The gray shaded area shows the region where ∆N BBN eff 0.364 is satisfied. Each of solid lines of different colors is a set of points on (m Φ , λ) plane producing each specified free-streaming length of A µ based on Eq. (15). The region below the green dotted line makes the dark thermal bath formation possible while the region above the red dotted line makes it sure that φ becomes non-relativistic before it decays to A µ s. 
Finally the region below the dotted blue line produces the lower bounds of m A smaller than 3keV via the mapping of the lower bound on thermal WDM provided by the Lyman-α forest data, i.e. m thermal wdm > 5.3 keV [11]. When combined together, indeed the applied three constraints denoted by the dotted lines produce the overlapped region for λ 10 −2 and m Φ 10 14 GeV. The points residing in the overlapped region enables A µ to resolve the small scale problems as the WDM by producing λ FS (0.05 − 0.08)Mpc. As such, A µ in the model is really shown to be a non-thermally originated WDM causing XENON1T anomaly, consistent with the various existing cosmological constraints. V. DISCUSSION In this letter, motivated by the recently reported XENON1T excess, we propose a minimal model where the massive gauge boson of U (1) B−L gauge theory with the kinetic mixing 10 −15 can play the role of the dark photon dark matter with its mass m A ∈ (2 keV,3 keV) inducing XENON1T anomaly. As the DM candidate with the free-streaming length of order ∼ O(0.1)Mpc, A µ is classified as the WDM which can address the small scale problems that ΛCDM is suffering from. Relying on U (1) B−L gauge theory which is the well motivated extension of the SM, we introduced a scalar and three heavy right-handed neutrinos charged under U (1) B−L in addition to the SM particle content, which is the minimal set-up for ensuring the successful operation of the seesaw mechanism and leptogenesis. Imposing masses to the three-right handed neutrinos, the scalar was shown to be able to serve as the parent particle for the dark photon warm dark matter. Due to the non-thermal production mechanism we assumed in the model, the gauge boson mass of m A ∈ (2keV, 3keV) can be consistent with the Lyman-α forest observation. Intriguingly, we figured out that for the self-quartic interaction λ 10 −2 and the scalar mass m Φ 10 14 GeV, the model can produce the non-thermally originated dark photon warm dark matter consistent without running a foul of existing cosmological and astrophysical constraints as a source of XENON1T anomaly. Although we focus on U (1) B−L gauge theory, our DM production mechanism can be easily applied to a hidden Abelian gauge theory. Note added: After we finished this paper, we became aware of arXiv:2007.02898 [hep-ph] [48]. However, the mechanism to make the U (1) B−L gauge boson DM there is different from ours. See also references in their paper for proposals related to XENON1T anomaly.
5,171.6
2020-07-08T00:00:00.000
[ "Physics" ]
Effects of oocyte source, cell origin, and embryo reconstruction procedures on in vitro and in vivo embryo survival after goat cloning The birth of cloned goats has been well documented, but the overall goat cloning efficiency by somatic cell nuclear transfer procedures is still low, which may be further intensified in extreme environments. The aim of this study was to produce cloned goats under the conditions of the Brazilian SemiArid region, in a transgenic program for the expression of human lysozyme in the milk to target childhood diarrhea and malnutrition, comparing the effects of oocyte source, cell type, and embryo reconstruction procedures on in vitro and in vivo embryo survival after cloning by micromanipulation or by handmade cloning. The use of in vitro-matured oocytes resulted in more viable embryos after cloning than in vivo-matured cytoplasts, but no differences in pregnancy rates on day 23 were seen between oocyte sources (77.5 vs. 77.8%, respectively). The presence or absence of the zona pellucida for embryo reconstruction (78.8 vs. 76.0%, respectively) did not affect pregnancy outcome after transfer. However, pregnancy rate on day 23 was higher for embryos chemically activated by a conventional than a modified protocol (88.1 vs. 50.0%), and for embryos reconstructed with mesenchymal stem cells and fetal fibroblasts (100.0 and 93.3%) than with adult fibroblasts (64.7%). Although most pregnancies were lost, the birth of a cloned female was obtained from embryos reconstructed by micromanipulation using non-transgenic control cells and in vitro-matured oocytes with intact zona pellucida, after conventional activation and transfer at the 1-cell stage. Introduction Childhood diarrhea and malnutrition still are some of the major social problems in developing countries.This is especially true in less favorable regions such as the Semi-Arid area of Brazil, resulting in thousands of infant deaths worldwide each year (Boccolini et al., 2012).Several epidemiological studies have already demonstrated the benefits of breast feeding for the infant´s health, including passive immunedefense against infections by pathogenic microorganisms, growth stimuli to benign agents in the intestinal microbiota, development and maturation of the gastrointestinal tract, protection against asthma and allergies, and anti-inflammatory effects (Lönnerdal, 2003;Oddy, 2017).The positive effects of human milk to breastfed children are reflected in an improved general health, adequate growth and development including epigenetic beneficial changes, lower susceptibility to chronic and acute diseases during and after childhood, (American Academy of Pediatrics, 1997, Verduci et al., 2014), and lower incidence of infections of the gastrointestinal, respiratory and urinary tracts (Levy, 1998).Recently, breastfeeding for at least 12 months also been shown to be associated with improvement in children neurodevelopment, an increase in IQ scores, more schooling and higher salaries as adults (Victora et al., 2015;Lechner and Vohr, 2017).Such effects on the health of the young have been attributed to the presence of immunocompounds in human breast milk, such as lysozyme, lactoferrin and secretory immunoglobulin A, or IgA (Levy, 1998;Hassiotou and Geddes, 2015).The antimicrobial effects of human lysozyme and lactoferrin are considered as integral part of the passive immunity and defense against bacteria, viruses, parasites and fungi that is passed on to children through human breast milk (Mountzouris et al., 2002;Chow et 
al., 2016;Lönnerdal, 2016).Unfortunately, breastfeeding and the supply of such compounds to the infant are not permanent, which normally causes an impact on the child's health in unassisted populations.In contrast, the milk produced by livestock, such as goats, can be easily and continuously obtained, and used as a substitute for the nutritional properties of breast milk.However, lysozyme and lactoferrin are present in insufficient concentrations in animal´s milk to provide effective protection to humans (Stenfors et al., 2002;Krol et al., 2010).Therefore, the production of human immunocompounds in the milk of domestic animals through genetic engineering could contribute to human gastrointestinal health by modulating the resistance and susceptibility to various diseases, such as childhood diarrhea (Maga and Murray, 1995;Maga et al., 2006a, b).Because of this potential, the production of transgenic goats to express human lysozyme (hLZ) in the milk may have a great impact on society (Maga et al., 2006a, b;Cooper et al., 2015), especially for the population of less favorable areas in the world, such as the Brazilian Semi-Arid region.For such purposes, cloning by somatic cell nuclear transfer (SCNT) may be useful for the production of goats for the lysozyme and lactoferrin transgenic models (Meng et al., 2012). Goat cloning by SCNT has been established since late 1990s (Baguisi et al., 1999), but the successful application of the technology is still challenging, which is translated by the low overall efficiency (< 1 to 5%) of the process as a whole (Baldassare et al., 2004;Gavin et al., 2013) in more favorable regions of the world.Technical and biological aspects associated with such low efficiency are further intensified when facing climatically extreme and more challenging environments, such as the Brazilian Semi-Arid region.Even though goats are generally considered more adaptable to adverse conditions than other domestic animals, high temperatures and low rainfall also affect this species through the lack of good quality food, excessive temperatures, and the presence of toxic plants, among other factors to which the animals are constantly exposed to (Carneiro, 2008;Chaves et al., 2011).As a consequence, an overall decrease in reproductive efficiency may occur (Chaves et al., 2010(Chaves et al., , 2011)), which can make the production of a cloned animal even more challenging.The adjustment of SCNT cloning procedures in goats, therefore, gains crucial importance in this specific environment, given that no reports of cloned goats produced in Brazil and between parallels 30 o N and 30 o S in the world have been available prior to this study and to our previous recent report (Martins et al., 2016). The aim of this study was to optimize goat cloning procedures under the conditions of the Brazilian Semi-Arid region, using somatic donor cells transgenic for the hLZ gene, through experiments evaluating the in vitro and in vivo survival of goat embryos cloned by micromanipulation or Handmade Cloning (HMC), comparing different cytoplast sources (in vivo-or in vitro-matured oocytes), karyoplast types (adult fibroblasts, fetal fibroblasts and mesenchymal stem cells), and manipulation and reconstruction procedures for the production of goat cloned embryos. Materials and Methods All reagents and the water used for medium preparation were from Sigma Chemical Co.(St Louis, MO, USA), unless stated otherwise. 
Cytoplast source: in vitro-matured and in vivo-matured oocytes Two cytoplast (oocytes) sources were compared for the production of cloned goat embryos, either by micromanipulation or by HMC procedures, as below. In vitro maturation Goat ovaries were obtained post-mortem from pubertal adult goats and transported in DPBS (Nutricell, São Paulo, Brazil) to the laboratory in an insulated container at 33°C.Cumulus-oocyte complexes (COCs) were obtained by ovary slicing.Viable COCs, selected based on morphological quality adapted from Leibfried and First (1979), were in vitro-matured (IVM) for 22 ± 2 h, according to Pereira et al. (2013). In vivo maturation. Healthy pubertal adult goats were subjected to ovarian stimulation for the collection of in vivo-matured oocytes.For that, an intravaginal progesterone insert (Eazi-Breed CIDR ® , Laboratórios Pfizer Ltda., Brazil) was placed on day 0, with the replacement by a new one after 6 days.On day 10, a total of 180 mg pFSH (Folltropin-V ® , Bioniche, USA) was given IM, twice a day, for 3 days (36 and 36, 36 and 36, 18 and 18 mg, respectively).The progesterone insert was removed at the last FSH dose (day 12), and approximately 15 h after removal, a dose of 0.025 mg of gonadorelin acetate (Gestran ® , ARSA S.R.L., Argentina), an analogue of GnRH, was given IM.Twenty-two hours (22 h) following the GnRH dose, ovaries were exteriorized by laparoscopy for the aspiration of >4 mm follicles with a 10 ml syringe attached to an 18 G needle.Recovered oocytes were selected according to the expansion of the cumulus cells and the presence of the first polar body (PB) under a stereomicroscope. For both oocyte sources, after the removal of the cumulus cells and selection of matured oocytes (PB selection), a group of oocytes from each source was subjected to enzymatic zona pellucida (ZP) removal in 0.5% protease (P8811) solution, according to Ribeiro et al. (2009) and Pereira et al. (2013), for subsequent embryo reconstruction by micromanipulation without ZP or by Handmade Cloning (HMC), as described below.The other group of oocytes from each source was kept with intact ZP for cloning by micromanipulation with ZP. Type of karyoplasts The somatic cells (karyoplasts) used for cloning were isolated from goats from the University of California, Davis, USA (CTNBio/Brazil 3467/2012), from a human lysozyme (hLZ) transgenic line.Mesenchymal stem cells (MSCs), adult fibroblast cells Anim.Reprod., v.14, n.4, p.1110-1123, Oct./Dec.2017 (AF), and fetal fibroblast cells (FF) were used for cloning by micromanipulation, whereas only MSCs and AF were used for Handmade Cloning, as below.Briefly, MSCs were isolated from the bone marrow of a neonate male; FF from a 40-day male fetus; and AF were obtained after the ear biopsy of a pubertal adult female, according to Baguisi et al. (1999), Monaco et al. (2009) and Gerger et al. (2010), respectively.The MSCs were used at 60-70% confluence (passage 4), the FF at 80-90% confluence (passage 4), and the AF (passage 3) at >95% confluence.Except for MSCs, the FF and AF cell cycles were synchronized by contact inhibition (high confluence) after 3 to 5 days of in vitro culture. 
In a few cloning procedures (n = 3), fetal fibroblast cells obtained from a 40-day non-transgenic female fetus were used at low passage (P2) and high confluence (>95%) as controls for cloning procedures and to evaluate in vitro and in vivo embryo survival.Due to the low frequency of use of such cells for cloning, data after the use of control cells are not presented in comparative form with the other transgenic hLZ cells lineages. Analysis of the cell cycle A portion of the MSCs, AF, and FF cells was used for cloning by micromanipulation, while the remaining cells were processed for the determination of the cell cycle phase through flow cytometry.Cultures of MSC, FF and AF cells were isolated with 0.25% trypsin-EDTA and centrifuged twice in DPBS.Then, cell were treated with 10 mg/ml RNase A (R4875) and 100 µg/ml propidium iodide (PI, P4170) in a 2.94% sodium citrate solution and 0.1% Triton TM X-100 (T8787), for 30 min at RT. Cells were then centrifuged at 1500 g for 5 min, at 4°C, re-suspended in DPBS and immediately placed in a container with ice for the determination of the cell cycle phase (G0/G1, S, G2/M) by flow cytometry (FACSCalibur, Becton Dickinson, San Jose, CA, USA).Histogram plots were created using the Cell Quest software (Becton Dickinson).Percentage of cells within the various phases of the cell cycle were calculated using Cell Quest by gating G0/G1, S, and G2/M cell populations, with a scatterplot of red fluorescence (FL2-A x FL2-W). Experiment 1: Production of goat cloned embryos by micromanipulation: effects of the enucleation of in vitro-or in vivo-matured oocytes with or without the zona pellucida, reconstruction with distinct karyoplast types by membrane fusion or cellular micro-injection, and embryo activation with or without cytochalasin B Enucleation with or without ZP Groups of zona-intact (ZI) and zona-free (ZF) in vitro-or in vivo-matured oocytes were enucleated by micromanipulation.For that, oocytes were first incubated for 15 min in TCM-HEPES supplemented with 5 µg/ml cytochalasin B (C6762) and 5 µg/ml Hoechst 33342 (B2883).For ZI oocytes, conventional micromanipulation procedures were performed, according to Baguisi et al. (1999) and Keefer et al. (2001).For ZF oocytes, enucleation by micromanipulation was performed according to Oback et al. (2003). Reconstruction by cell fusion (CF) or by donor cell microinjection (CI) For the reconstruction of ZI embryos, the nucleus donor cells (MSC, AF or FF) were either transferred by micromanipulation to the perivitelline space of enucleated goat oocytes (reconstruction by cell fusion, CF), or were injected directly into the ooplasm (reconstruction by cell injection, CI), according to Keefer et al. (2001) and Chen et al. (2007), respectively.Prior to injection into the ooplasm, cells were consecutively pipetted with a 12 µm reconstruction pipette until a deformation of the cell membranes was visible. For the reconstruction of ZF embryos, enucleated structures were incubated for 2 to 3 min in 500-µg/ml phytohemagglutinin (PHA) solution so that the cytoplast could adhere to the karyoplast, under stereomicroscope.All ZF embryos were reconstructed by membrane fusion. 
For membrane fusion (ZI-CF and ZF-CF), reconstructed complexes were rinsed in fusion medium (Ribeiro et al., 2009), and then subjected to membrane fusion in an electrofusion apparatus (BTX Electro Cell Manipulator 200, Biotechnologies & Experimental Research Inc., USA San Diego, CA, USA) coupled to a 320 μm fusion chamber (BTX453, BTX Instruments, Genetronics, San Diego, CA, USA).The ZI structures were fused by two 2-kV/cm DC pulses for 20 μs, whereas the ZF structures received two 1-kV/cm DC pulses for 20 μs.Fusion rates were assessed 45 to 60 min after fusion.Non-fused structures were subjected to a second round of electrofusion. Use of cytochalasin B during the embryonic activation Reconstructed embryos were submitted to two different protocols for embryo activation, based on Dutta et al. (2011), for protocol 1, and on Wells et al. (2011), for protocol 2, as follows.For the conventional protocol, or protocol 1, cloned embryos were exposed for 5 min to 5μm ionomycin solution (I0634).Embryos were then incubated at 38.5ºC for 4 h in TCM199 supplemented with 2 mm 6-DMAP (D2629).For the modified protocol, or protocol 2, structures were incubated for 2 h in 2.5 µg/ml cytochalasin B (CB) immediately after fusion evaluation, followed by the activation in 5 µm ionomycin for 1 min.Then, embryos were incubated in 2 mm 6-DMAP for 4 h.Finally, cloned embryos were in vitro-cultured, as described below. Experiment 2: Production of goat cloned embryos by handmade cloning: effects of the cytoplast source, karyoplast type, and final embryonic cytoplasmatic volume Procedures for HMC were adapted from Ribeiro et al. (2009) for cattle and Pereira et al. (2013) for goats. Cytoplast source In vitro-and in vivo-matured COCs were subjected to enzymatic removal of the zona pellucida in 0.25% protease, as above. Embryonic cytoplasmatic volume Zona-free oocytes were sectioned manually in 2.5 µg/ml cytochalasin B, depending on the presence or absence of the polar body (PB) or a protrusion cone (PC), indicative of the location of the MII plate.Oocytes without PB or PC were bissected in halves of equal sizes and volumes (50% of the volume), whereas oocytes with PB or PC were sectioned at the extremity next to the PB or PC, resulting in portions of approximately 85 and 15% of the original volume, with the smaller portion containing the MII plate.All hemioocytes were selected by the presence (nucleated) or absence (enucleated) of the MII plate under UV light, in TCM199 + 10% of FBS + 10 µg/ml Hoechst 33342.Embryos were reconstructed either with two 50% hemioocytes + donor cell (50% + 50% + cell) or one 85% hemi-oocyte + donor cell (85% + cell). Karyoplast type Single hLZ-derived MSC or AF cells were used as karyoplasts for embryo reconstruction by attachment to enucleated in vivo-or in vitro-matured ZF hemi-oocytes with 50 or 85% cytoplasmatic volume, after a brief exposure to PHA solution, as aforementioned.Structures reconstructed by HMC were fused under the same fusion procedures as described above for ZF-embryos, followed by chemical activation by the conventional protocol (protocol 1), as above.HMC-derived embryos were in vitro-cultured, as described below. 
In vitro culture (IVC) Cloned embryos reconstructed by micromanipulation or by HMC were in vitro-cultured in modified SOFaa medium (Holm et al., 1999) supplemented with 5% FBS + 0.3% BSA and 1% ITS, at 38.5°C with 100% relative humidity, under a gas mixture containing 5% CO2, 5% O2 and 90% N2 (Ribeiro et al., 2009). For ZI cloned embryos, 15 to 20 structures were cultured in 100 µl drops; in turn, ZF cloned embryos were cultured in a modified WOW system (Vajta et al., 2000; Feltrin et al., 2006) in 4-well dishes containing 500 µl IVC medium. Prior to transfer to synchronous female recipients, cloned embryos reconstructed by micromanipulation were in vitro-cultured for approximately 18 h, whereas embryos reconstructed by HMC were in vitro-cultured for 7 days to the blastocyst stage. In some procedures (n = 6), groups of ZI oocytes were kept under the same conditions as the structures reconstructed during cloning, to be chemically activated (conventional protocol, or protocol 1) and in vitro-cultured for 7 days, under the same conditions described above (parthenogenetic control group). Embryo transfer (ET) and pregnancy diagnosis: embryos reconstructed by micromanipulation Cloned embryos at the 1-cell stage were transferred to the oviduct of recipient females on day 1 of the cycle by semi-laparoscopy, approximately 8 h after the LH dose. The mean number of embryos transferred per female was 13.4, with a variation of 11 to 25 embryos per recipient. On the 4th day after the embryo transfer, an intravaginal progesterone insert (Eazi-Breed CIDR®, Laboratórios Pfizer Ltda., Brazil) was placed in the female recipients and remained until pregnancy diagnosis. The progesterone insert was replaced weekly until day 140 of the pregnancy, or until the detection of a non-viable pregnancy (no pregnancy after diagnosis or after detection of conceptus death). Embryo transfer (ET) and pregnancy diagnosis: embryos reconstructed by HMC Cloned goat embryos on day 7 of development were transferred to synchronous female recipients (4 to 6 embryos/female), by semi-laparoscopy, to the uterine horn ipsilateral to the ovary with a functional corpus luteum, according to Melican and Gavin (2008). This group of female recipients did not receive any progesterone supplementation (intravaginal inserts) after the transfer of embryos. Pregnancy diagnosis was performed on day 23 by rectal ultrasonography using a 6 MHz linear transducer. Pregnancies were monitored by ultrasound scanning every 3-4 days until no sign of pregnancy was displayed or until pregnancy viability was confirmed. For viable pregnancies, a transabdominal ultrasound examination was repeated at weekly intervals from the 35th day of pregnancy through term. The presence of one or more embryos or fetuses, a detectable heartbeat, embryonic or fetal membranes, and placentomes were examined qualitatively. Data analysis Data relative to in vitro survival, fusion and pregnancy rates were compared between the experimental groups by the Chi-square test (Minitab, State College, PA, USA), for P < 0.05, for cloning by micromanipulation (experiment 1) or by HMC (experiment 2), considering oocyte source (in vivo vs. in vitro) and cell type (MSC vs. FF vs. AF for micromanipulation, and MSC vs. AF for cloning by HMC). For cloning by micromanipulation, the analyses also considered the type of manipulation (ZI vs. ZF), reconstruction (CF vs. CI), embryonic activation protocol (1 vs.
2), and the proportion of cells at different phases of the cell cycle (G0/G1 vs. S vs. G2/M), whereas for cloning by HMC, the analyses also included data on the final cytoplasmic volume (85 vs. 100%). Data regarding the number of retrieved COCs per animal, for both cloning methods, were compared by Student's t-test (P < 0.05). Cytoplast source For in vitro-matured oocytes, after 16 replications, a total of 4,138 immature COCs (19.9 COCs/goat) were recovered by post-mortem ovary slicing from 415 ovaries collected from non-stimulated slaughterhouse does. Upon morphological selection, 20 grade I (1.0%), 380 grade II (18.9%), 1,255 grade III (62.4%) and 509 grade IV (20.2%) oocytes (2,164 viable COCs, 52.3%) were in vitro-matured (10.4 COCs/female). After IVM, the maturation rate, based on the presence of the PB, was 83.6% (17/20), 60.2% (228/380), 43.5% (546/1255) and 30.2% (153/509) for grades I, II, III and IV COCs, respectively, for a total maturation rate of 48.4% (1,047/2,164) and a mean of 5.0 matured oocytes/female. For in vivo-matured oocytes, after seven replications, a total of 974 COCs (14.3 COCs/female) were recovered after the in vivo aspiration of pre-ovulatory follicles from 136 ovaries from pFSH-stimulated females. Upon selection, 937 COCs (96.2%) had cumulus cell expansion (13.8 oocytes/goat), and 741 oocytes displayed the extrusion of the 1st PB, resulting in a maturation rate of 52.6% and 7.3 mature oocytes/female. The maturation rate and the number of matured oocytes/female were significantly higher in the group of in vivo-matured oocytes when compared with the group of in vitro-matured COCs (P < 0.05). However, when used for embryo reconstruction by micromanipulation, survival rate after enucleation and after reconstruction per se was higher in the in vitro-matured group than in the in vivo-matured counterpart (Table 1), with no differences observed in pregnancy rates between groups. A total of 120 in vitro-matured oocytes obtained from samples from each replication were chemically activated and in vitro-cultured as controls for the manipulation process per se and for oocyte quality/competence, of which 75 cleaved (62.5%) and 22 reached the morula/blastocyst stages (18.3%) on day 7 of development. Type of karyoplast and cell cycle phase No differences were observed in fusion rates and in the number of viable embryos for IVC between cell types used for the production of cloned embryos (MSC, FF, AF). After the IVC, the number of viable embryos was greater in the group derived from FF cells than in the other groups. However, pregnancy rate was higher in the groups of embryos produced using FF and MSCs when compared with AF cells (Table 2). Differences (P < 0.05) were detected between cell types regarding the distribution in the phases of the cell cycle, as depicted in Figure 1. The synchronization of the cell cycle in the G0/G1 phase in the adult fibroblast (AF) group was greater than in the fetal fibroblast (FF) group, which, in turn, was greater than for mesenchymal stem cells (MSC); this pattern was related to the mean cell confluence of each cell type (>95, 80-90 and 60-70%, respectively) when used for cloning.
Removal of zona pellucida
Considering the presence or removal of the ZP, the post-enucleation survival, the fusion/microinjection rate and the number of viable embryos after reconstruction by micromanipulation were higher in the ZI group than in the ZF group. However, no differences were seen in pregnancy rates between groups (Table 3).
Method for nucleus donor transfer
The survival rate after reconstruction by micromanipulation was greater in the group of cell microinjection (CI) into the ooplasm than in the group of cell fusion (CF), irrespective of the presence (ZI) or absence (ZF) of the ZP (P < 0.05). However, no significant differences in pregnancy rates were observed between groups (Table 4).
Activation protocol
When comparing the activation protocols, the pregnancy rate on day 23 after reconstruction by micromanipulation was higher in the group of embryos activated by the conventional protocol (protocol 1) than with the activation protocol with CB (protocol 2) (Table 5).
Type of karyoplast
In general, the type of karyoplast or cytoplast did not affect any in vitro or in vivo embryonic development parameter, with no significant differences detected between groups regarding in vitro embryo development until day 7 (Table 7). When analyzed separately, embryo reconstruction using mesenchymal stem cells (MSC) resulted in higher fusion rates than using adult fibroblasts (160/198, 80.8% vs. 119/191, 62.3%, respectively), with no differences in re-fusion rates (24/43, 55.8% vs. 28/69, 40.6%, respectively).
In vivo embryo development and birth of a cloned goat
Embryos reconstructed by micromanipulation
Collectively, 782 cloned embryos were transferred on day 1 of development to the oviducts of 58 synchronous recipient females, resulting in 45 pregnancies (45/58, 77.0%) on day 23 of gestation. However, all established pregnancies with transgenic human lysozyme (hLZ) cells, in all groups and subgroups, were lost before the fetal phase (up to day 45) of gestation. Nevertheless, after the induction of parturition following our established protocol (Chavatte-Palmer et al., 2013), a viable cloned female was born by elective Caesarean section after 147 days of gestation, from the transfer of 27 embryos cloned with non-transgenic control cells to two female recipients. The cloned female was generated from reconstruction by micromanipulation with control cells (non-transgenic fetal fibroblasts), using in vitro-matured oocytes with the zona pellucida (ZI), reconstructed by membrane fusion (CF), activated by the conventional protocol (P1) and transferred on day 1 of development to the oviduct of a female recipient that had received a vaginal progesterone insert throughout pregnancy.
Embryos reconstructed by HMC
A total of 96 goat blastocysts were transferred on day 7 of development to the uteri of 19 synchronous females. The pregnancy diagnosis, performed by ultrasound on day 23 of gestation, resulted in three pregnancies originating from in vivo- (n = 1) and in vitro-matured (n = 2) oocytes, of which two were obtained using mesenchymal stem cells and one using adult fibroblasts. The three pregnancies were lost before day 45 of gestation.
Discussion
Although cloning by SCNT is well established in goats, with birth rates similar to those found in other species, there are no reports of cloned goats born in the tropics, between parallels 30°N and 30°S, with all cloned goats born in countries with temperate or subtropical climates (Baguisi et al., 1999; Keefer et al., 2001; Reggio et al., 2001; Ohkoshi et al., 2003; Lan et al., 2006; Chen et al., 2007; Folch et al., 2009; Akshey et al., 2010; Colato et al., 2011; Liu et al., 2011; Nasr-Esfahani et al., 2011; Wells et al., 2011; Meng et al., 2012; An et al., 2012; Zhou et al., 2013; Yuan et al., 2014; Feng et al., 2015; Hosseini et al., 2015; Zhang et al., 2015; Yang et al., 2016a; Bai et al., 2017). This fact may be associated with the low productive and reproductive indexes of goat herds in tropical countries, such as in the Brazilian Semi-Arid region, where the annual birth rate in goats does not exceed 20% (Guimarães, 2006).
Oocyte quality and competence play a crucial role in the success of a cloning program, since the ooplasm is responsible for reprogramming the nucleus donor, which has an important effect on subsequent development (Fissore et al., 1999; Kelly et al., 2007; Mohapatra et al., 2015). Normally, better quality oocytes, usually grades I (GI) and II (GII), are selected for in vitro maturation (Chen et al., 2007; Tang et al., 2011). When analyzing the data regarding the quality of immature COCs in this study, less than 20% of the selected COCs were rated as GI or GII, with a large portion of the COCs lacking, or having little, cumulus cell vestment. Previous studies have shown that body condition score, the physiological condition of the egg donor, breed, age and individual variation directly interfere with the quality of the recovered COCs (Edwards and Hansen, 1996; Vinoles et al., 2002; Fatehi et al., 2005; Cecconi et al., 2007). In addition, the absence of significant variation in the photoperiod throughout the year, the low rainfall, or even the high temperatures in equatorial or tropical zones may cause a reduction in the quality of goat COCs (Jordan, 2003; Chaves et al., 2010, 2011). According to Roth and Hansen (2004), inter- and intra-cellular components define how an oocyte will react to effects from the environment, and high temperatures, even if within physiological ranges, can potentially be a stimulus to apoptosis in mammalian oocytes. A fact that corroborates this assertion is that the rate of development of parthenogenetic embryos to the morula and blastocyst stages obtained in our experiment was 18.3%, a low value for oocytes previously selected for the presence of 1st PB extrusion and cytoplasm morphology, when compared with studies by Apimeteetumrong et al. (2004) and Nasr-Esfahani et al. (2011), who obtained 42.3 and 54.9% of parthenogenetic development to the morula and blastocyst stages, and to the blastocyst stage, respectively. The animal response to climatic elements and factors may have influenced the results of this study, resulting in a low overall efficiency of cloning under our conditions. Nevertheless, we obtained approximately 30% of embryonic development to the blastocyst stage after IVC of embryos cloned by HMC. However, these embryos exhibited low morphological quality (data not shown), which is commonly reflected in lower rates of in vivo development (Pereira et al., 2013), as observed in this study.
Based on the morphological features of the collected COCs, one of the alternatives attempted to improve results was the use of in vivo-matured oocytes. However, this issue seems to be controversial: Reggio et al. (2001) found no differences between in vitro- and in vivo-matured oocytes, showing that both oocyte sources were similarly competent to support in vivo development after goat cloning, whereas Behboodi et al. (2004) and Martins et al. (2016) did not obtain pregnancies after the transfer of cloned embryos produced using in vitro-matured oocytes. Unlike Reggio et al. (2001), our experiment found that the nuclear maturation rate was higher in the group of in vivo-matured oocytes than in the in vitro-matured counterparts. However, the nuclear maturation rate only takes into account the extrusion of the 1st PB, which is not the only factor to be considered when determining oocyte quality and competence. As in the findings by Reggio et al. (2001), no differences in pregnancy rates were observed between oocyte sources, which may indicate that the in vitro protocols for nuclear maturation are rather well established for goats, with attained pregnancy rates similar to those found for other species (Baldassare et al., 2004).
In this study, two distinct SCNT cloning micromanipulation methods were compared for the production of cloned embryos: conventional cloning with (ZI) or without (ZF) the zona pellucida (ZP). Cloning by micromanipulation with the ZP is the method in most widespread use worldwide for goats, with the ZP maintained until the end of the procedure (Keefer et al., 2001; Chavatte-Palmer et al., 2013). However, this technique requires greater skill from the operator than the zona-free method, since the presence of the ZP imposes an extra challenge for the aspiration of the MII plate and the PB (Peura, 2003). The ZF technique, on the other hand, should be an easier process, as the enucleation is more straightforward, which is further facilitated because there is no need for another micromanipulation step for the reconstruction of embryos by cell insertion (Booth et al., 2001; Hosseini et al., 2015). Although embryo production rates using both techniques are similar, the ZF procedure enables the production of a greater number of embryos per routine (Booth et al., 2001; Peura, 2003). In our case, the post-enucleation survival, the fusion/microinjection rate and the number of viable embryos were higher in the ZF group than in the ZI group. However, no differences were observed in pregnancy rates between groups.
The cell type, cycle synchronization, lineage and time in culture, among other factors, are known to be crucial for the cloning outcome (Dominko et al., 1999; Yang et al., 2016b). Keefer et al. (2001) used three different GFP transgenic cell lines to clone goats and found that only one line was capable of producing viable animals. In that same study, the group used five different lines of fetal fibroblasts, and only two were able to generate viable cloned animals. According to Baldassare et al. (2004), pregnancy rates varied from 0 to 89% when different cell lines were used for SCNT cloning, demonstrating the high variability in results between cell lines. In our experiment, after the IVC period, the number of embryos suitable for transfer was higher in the group of fetal fibroblasts (FF) than in the other groups. However, pregnancy rates were higher for embryos produced with fetal fibroblasts (FF) and mesenchymal stem cells (MSC). In addition, similarly to what was observed by Chen et al.
(2007), the survival rate after reconstruction was greater when cells were microinjected into the ooplasm than when cell fusion was used, irrespective of the presence or removal of the zona pellucida.
In the group of embryos produced by HMC, a final cytoplasmic volume of 100% resulted in higher cleavage rates than 85%. We have previously seen that the reduction of the cytoplasmic volume to 50% of the final volume significantly compromises the in vitro development and embryo kinetics of cloned bovine embryos, with a reduction of the total number of cells in blastocysts (Ribeiro et al., 2009). Because the cytoplast plays a key role in chromatin remodeling, the effect of the cytoplasmic volume after cloning cannot be neglected. Previous studies also corroborate the effect of the reduction or increase of the cytoplasmic volume on embryonic development. The removal of 50 or 25% of the ooplasm during enucleation compromised embryonic development and quality and the total number of cells in cloned bovine blastocysts (Westhusin et al., 1996; Peura et al., 1998; Ribeiro et al., 2009). In conditions where the volume is reduced, the amount of ooplasmic components probably will not be sufficient to support cleavage, activation of the embryonic genome, or even cavitation; and since the cytoplasmic volume does not increase during the first cycles of cell division, the total number of cells tends to be limited by the total volume of the developing embryo (Westhusin et al., 1996; Ribeiro et al., 2009).
Although embryonic vesicles were observed from day 23 of development in all groups, no heartbeats could be observed in most cases. Such non-viable structures often remained until day 50 of gestation, when the progesterone inserts were removed. After a few days, the structures could no longer be observed. These findings corroborate those of Baguisi et al. (1999) and Zhang et al. (2010). According to Baguisi et al. (1999), more than 2/3 of their clone pregnancies were not viable, and such "embryonic structures" were observed in the uterus until day 55 of gestation. In our case, out of 45 embryonic structures, a heartbeat could only be observed in five cases. Several factors may have contributed to these findings, including failures in placentation and/or embryonic genome activation, or even in the enucleation process, which could lead to the transfer of polyploid embryos in rare cases (Baguisi et al., 1999). Collectively, the overall efficiency of cloning under our conditions was 0.11%, considering the number of transferred embryos (1/809) needed to obtain one live-born animal, which is significantly lower than previously reported in the literature (Keefer et al., 2001, 2002; Baldassare et al., 2004), even for transgenic cloned kids (Gavin et al., 2013; Feng et al., 2015).
The high rate of pregnancy loss observed in this study may have been caused by the cell types and lines used for cloning, by low oocyte quality and inefficient genomic reprogramming, and even by technical aspects inherent to SCNT cloning per se. More studies are needed to investigate such aspects, as observed during this experiment. In addition to the potential failures, the high pregnancy rates verified in this work in all groups may also be related to the use of the progesterone insert from the 4th day after embryo transfer, which could have prevented the return to the natural estrous cycle, 'rescuing' less viable embryos that would otherwise be unable to trigger the maternal recognition of pregnancy, an event already proposed by Bertolini et al. (2002) for in vitro-derived bovine embryos. In fact, a pilot study carried out by our group using progesterone supplementation (intravaginal inserts) on day 4 after the artificial insemination of female goats as a way to increase pregnancy rates resulted in 41.7% (5/12) and 80.0% (8/10) pregnancy in the control and progesterone-treated groups, respectively (Feltrin & Bertolini, 2011, University of Fortaleza; unpublished data). This pilot study indicated the innocuity, or even the potential benefit, of a progesterone treatment to improve fertility in cyclic pubertal does. Since in vitro-manipulated embryos have a lower viability than normal, being smaller in size at early embryonic stages (Bertolini et al., 2002; Martin et al., 2007), it is possible that the rescue of some less viable embryos may have occurred (Bertolini et al., 2002), resulting in higher pregnancy rates than those reported in the literature for cloned goat embryos (Chavatte-Palmer et al., 2013).
Despite the low overall efficiency of cloning by SCNT observed in this study, especially regarding birth rates, we report the birth of a cloned goat female in August 2012, from control non-transgenic cells, using the micromanipulation of in vitro-matured oocytes with the zona pellucida, membrane fusion, conventional activation, and transfer at the 1-cell embryo stage to the oviduct of a recipient female receiving progesterone supplementation throughout pregnancy. The information generated in this study may serve as a basis for subsequent studies, which may contribute in the future to a greater efficiency in the production of transgenic cloned animal models in arid regions of the world, including models that can help improve the quality of life of the population, such as goats producing milk containing human lysozyme. In this sense, studies that take into account physiology, nutrition, health, and reproductive aspects, among others (Bertolini, 2009), are required to uncover the factors associated with the lower reproductive performance of goat herds in the Brazilian Semi-Arid region.
Table 1. Survival rates after manipulation and embryo reconstruction following cloning by micromanipulation in goats, using oocytes obtained either by in vitro maturation after post-mortem oocyte collection from non-stimulated females, or by in vivo maturation after in vivo oocyte collection from FSH-stimulated females.
Table 2. In vitro and in vivo survival of reconstructed embryos using fetal fibroblasts (FF), adult fibroblasts (AF), or bone marrow-derived mesenchymal stem cells (MSC) as nucleus donor cells for cloning by micromanipulation in goats.
Table 3.
Effect of the presence (ZI) or removal (ZF) of the zona pellucida on survival after enucleation and embryo reconstruction by micromanipulation procedures for cloning by SCNT in goats.
Table 4. In vitro survival and pregnancy outcome of goat cloned embryos after embryo reconstruction by micromanipulation and transfer to female recipients on day 1 of development.
Table 5. In vitro survival and pregnancy outcome of goat cloned embryos after embryo reconstruction by micromanipulation and embryo activation using either a conventional or a modified activation protocol.
Table 6. Recovery and maturation rates using oocytes obtained in vivo from pFSH-stimulated females (in vivo maturation) or post-mortem from non-stimulated females (in vitro maturation) for embryo reconstruction by Handmade Cloning.
8,761.2
2017-01-01T00:00:00.000
[ "Biology" ]
New Clustering Schemes for Wireless Sensor Networks
In this paper, two clustering algorithms are proposed. In the first, we investigate a clustering protocol for single-hop wireless sensor networks that employs a competitive scheme for cluster head selection. The proposed algorithm, named EECS-M, is a modified version of the well-known protocol EECS, in which some of the nodes volunteer to be cluster heads with equal probability. In the competition phase, in contrast to EECS, which uses a fixed competition range for every volunteer node, we assign each volunteer a variable competition range related to its distance to the base station. The volunteer nodes compete within their competition ranges, and the one with the most residual energy becomes a cluster head. In the second, we develop a clustering protocol for single-hop wireless sensor networks in which some of the nodes volunteer to be cluster heads. We develop a time-based competitive clustering algorithm in which the advertising time is based on the volunteer node's residual energy. We assign every volunteer node a competition range that may be fixed, or variable as a function of distance to the BS. The volunteer nodes compete within their competition ranges, and the one with more energy becomes a cluster head. In both proposed algorithms, our objective is to balance the energy consumption of the cluster heads over the whole network. Simulation results show more balanced energy consumption and longer lifetime.
INTRODUCTION
A sensor network is a collection of small, low-power, low-cost sensor nodes that have some computation, communication, storage and even movement capabilities. These nodes can operate unattended, sensing the environment, generating data, processing data, and providing the data to users. With these features, sensor networks have been adopted in many pervasive computation and communication scenarios such as remote surveillance, habitat monitoring, and so on [1,2].
The deployment of wireless sensor networks in many application areas, e.g., aggregation services, requires self-organization of the network nodes into clusters. In these cases, sensors in different regions of the field can collaborate to aggregate the information they gather. For instance, in habitat monitoring applications the sink may require the average temperature; in military applications the existence or not of high levels of radiation may be the target information being sought. It is evident that by organizing the sensor nodes in groups, i.e., clusters of nodes, we can reap significant network performance gains. Clustering not only allows aggregation, but limits data transmission primarily to within the cluster, thereby reducing both the network traffic and the contention for the channel.
In this case, the data gathered by each node is processed locally and aggregated at a central coordinator referred to as a cluster head (CH), and the redundant data (if any) is omitted to provide more accurate reports about the local region being monitored. In addition, data aggregation reduces the communication overhead in the network, leading to significant energy savings. Node clustering is an efficient network organization for supporting data aggregation, and it improves network lifetime [3].
Direct transmission (single-hop) and hop-by-hop (multi-hop) transmission are the two basic communication patterns in wireless networks. It has been noticed that in the case of single-hop communication the furthest sensors tend to deplete their energy budget faster than other sensors. In other words, in direct transmission, where packets are transmitted directly to the sink without any relay, the nodes located farther away from the sink bear a higher energy burden due to long-range communication, and these nodes may die out first. To achieve balanced energy consumption, an elegant solution is to make cluster size dependent on the energy consumption of the cluster heads. Thus, smaller clusters are needed at greater distances from the BS in order to save more energy for the cluster heads. The main contribution of this paper is to achieve energy balancing through clustering in such single-hop networks.
With this motivation, we propose, as the first algorithm, a clustering protocol that employs the levels' total energy and their distances to the BS to form clusters. We build on a competitive algorithm proposed in [4], named EECS, for selecting CHs among many tentatively selected nodes. In the proposed algorithm, which we name EECS-M (modified EECS), some of the nodes become volunteers to be CHs in the network. They then broadcast a competition message within their pre-assigned competition range, which (in contrast to the fixed competition range used in EECS) is a function of the distance to the BS. Every other volunteer node within this competitive range quits the competition if it has less energy.
The cluster architecture simplifies topology management and reduces the number of sensor nodes contending for channel access. However, a CH drains its energy more quickly than ordinary nodes due to its additional computational and operational activities. Thus, a CH election algorithm must be distributed, energy-efficient, and load-balanced [3]. With this motivation, we also propose, as the second algorithm in this paper, a clustering algorithm that employs each node's residual energy for CH selection and its distance to the BS to form clusters. We again refer to EECS as a competitive algorithm for selecting CHs among many tentatively selected nodes. A variable competition range, derived from a recursive formula, is chosen, which is proportional to the cluster's size. As shown later, these unequal clusters cause the load to be distributed evenly over the whole network and result in greater node longevity. In the second proposed algorithm, the network's lifetime is the time until the first node in the network runs out of energy.
In fact, in the second proposed algorithm, we developed a competitive clustering algorithm that uses a time-based advertising procedure. Initially, some of the nodes become volunteers to be CHs in the network. They then start advertising in the network using a timing schedule that is a function of energy: every node with more residual energy starts advertising sooner. Each node is also assigned a competitive range that may be fixed, or variable as a function of distance to the BS. Every other volunteer node within this competitive range stops advertising if it has less energy; otherwise, it waits for its turn to advertise.
The remainder of this paper is organized as follows: in section 2, we present related work on previous well-known clustering schemes proposed in the literature. In section 3, a common model for WSNs is introduced, and the network's operation stages, including the cluster selection/formation phase and the data gathering/reporting phase, are described for both algorithms. Section 4 focuses on the proposed algorithms in detail. Simulation results are included in section 5.
RELATED WORK
Many methods have been proposed to address energy efficiency and prolong the lifetime of wireless sensor networks, and in the past few years many clustering algorithms have been proposed for ad hoc and sensor networks aiming to improve energy efficiency. LEACH [5] is an application-specific clustering protocol that utilizes random selection and frequent rotation of CHs to distribute the total load across all nodes. The clustering process involves only one iteration, after which a node decides whether or not to become a CH, and nodes take turns carrying the CH role. Data communication in LEACH is based on a single-hop communication model. The author also proposed two variants of LEACH, referred to as LEACH-C (LEACH-centralized) and LEACH-F (LEACH with Fixed clusters). AROS [6] is a newer version of LEACH which uses asymmetric communication with a semi-centralized clustering algorithm. The author demonstrated that AROS improves communication energy efficiency as the network size increases. HEED [7] selects CHs through an O(1)-time iteration according to a hybrid of nodes' residual energy and another parameter, such as node proximity to its neighbors or node degree.
The author in [4] presented EECS, a novel approach that distributes the CHs uniformly across the network through localized single-hop communication with little overhead. A competitive algorithm is suggested for the CH selection phase, and a fixed competition range is specified for each volunteer node. A weighted cost function is also introduced to manage the number of cluster members. Every node that finds a more powerful node in its competition set gives up the competition immediately and broadcasts a QUIT acknowledgment message. Any node that finds itself more powerful than the others within its competition radius introduces itself as a CH and broadcasts its advertisement message. The message complexity of this algorithm becomes problematic in dense networks, where too many nodes compete to become CH. A similar approach is taken in [8], where the authors applied a variable competition radius to the tentative nodes. An energy-balancing criterion for every CH is used to derive a recursive equation for its competition range based on the node's distance to the BS.
In [9], several energy-efficient communication protocols were proposed based on power control and load balancing, aiming at an even distribution of the residual energy of the sensors and thus prolonging network lifetime. Dagher et al. [10] presented a theory for maximizing the lifetime of multi-hop WSNs; an optimal centralized solution was presented in the form of an iterative algorithm. In [11], a taxonomy and general classification of published clustering schemes was presented, and the authors surveyed different clustering algorithms for WSNs. In [12], cluster size and the number of cluster heads in a region were investigated for the case where all the devices in a WSN are deployed randomly.
In this paper, since designing an efficient MAC layer is not our goal, an ideal, simple MAC layer is assumed that is collision-free and uses a TDMA schedule for the nodes' data communication. Our goal is to balance the energy consumption over the whole network in such a way that the network's lifetime is increased.
SYSTEM MODEL
To illustrate the impact of the physical limits of sensor networks on the design of our algorithms, we briefly discuss the related wireless network model, which depends strongly on the application. We consider a network with the following characteristics:
- The sensor nodes are homogeneous and uniformly distributed in a square field.
- The sink and the sensor nodes are assumed to be static once deployed.
- The sink node, or base station (BS), is sited outside the square field.
- All sensor nodes are able to set their transmission power according to the distance to the destination.
- Sensor nodes are assigned unique IDs and are fully synchronized via a synchronization beacon broadcast from the BS at the beginning of every round.
A. Network's Operation and Data Gathering Model
For most data-gathering applications, the sensors usually operate in a low-duty-cycle mode. The interval between one duty cycle and the next may be several minutes, hours, or even days. This characteristic motivates the use of periodic sleeping to conserve energy. Each CH acts as a local coordinator of data transmission in its cluster. It sets up a TDMA schedule for all the cluster's nodes to ensure that there are no collisions among data messages. This schedule also allows the radio component of the relevant nodes to be turned off at all times except during their transmit slots [5]. The data reporting process, sometimes called the "steady state phase", is divided into rounds. In each round, every non-CH node in a cluster sends its raw data, obtained from sensing the environment, directly to its respective CH. The nodes also embed their residual energy information in the same packet; this information is used to compute the average network energy at the BS.
The CH nodes are responsible for coordinating the members of their clusters and for communicating with the sink node on behalf of their clusters via single-hop communication. In addition, each CH aggregates the data received from its members with its own gathered data to compress the amount of data to be sent to the BS. Data aggregation improves the performance of the network in terms of communication cost; however, the optimality of performing data aggregation is beyond the scope of this paper. At the end of each round, the clustering phase restarts and new CHs are selected for the next round.
B. Energy Consumption Model in Single-Hop Networks
In our model, we assume that each sensor can intelligently choose its transmission power based on the link distance, which is true in typical sensor node implementations. A simple model is described in [5], where each node dissipates energy to run the radio electronics and the power amplifier on the transmitter side. The power attenuation is a function of the distance between transmitter and receiver, and there is a crossover distance d_0 that can be used to model the propagation loss. Thus, to transmit an l-bit message over a distance d, the radio expends:

E_Tx(l, d) = l·E_elect + l·ε_freespace·d^2, if d < d_0
E_Tx(l, d) = l·E_elect + l·ε_multipath·d^4, if d ≥ d_0

where E_elect is the energy required to run the electronic circuits, and ε_freespace and ε_multipath are two parameters which depend on the noise figure and the required SNR for proper signal detection at the receiver. It can also be written in a more general form as:

E_Tx(l, d) = l·(p + q·d^α)

where p and q are constants related to the node's energy dissipation to run the radio electronics and power amplifier in the transmitter, and α is the path loss factor.
Besides this, there are two more sources of energy consumption at the CHs. Each CH consumes energy for receiving data from its cluster members and for fusing it into a single packet, which is outlined by:

E_Rx(l) = m·l·E_elect and E_fusion(l) = m·l·E_BF

where, in both equations, m is the number of cluster members and E_BF is the computed energy for beam-forming data aggregation [5].
THE PROPOSED ALGORITHMS
In this paper, we propose two clustering algorithms which result in more balanced energy consumption and longer lifetime.
Details of the First Algorithm (EECS-M)
As discussed previously, any detected event must be reported to the BS in the form of data packets. In some applications, such as fire detection, the event must be reported as soon as possible, and too much latency is unacceptable. The recommended communication model for such networks is single-hop communication. Nevertheless, farther nodes consume more energy for data reporting.
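The energy asymmetry just described follows directly from the radio model of the previous section. The sketch below makes that model concrete; the numeric constants are typical values from the literature on this model, not the simulation parameters of this paper, so treat them as assumptions.

```python
import math

# Typical first-order radio model constants (assumed, not from Table 1).
E_ELEC = 50e-9        # J/bit, electronics energy
EPS_FS = 10e-12       # J/bit/m^2, free-space amplifier
EPS_MP = 0.0013e-12   # J/bit/m^4, multipath amplifier
D0 = math.sqrt(EPS_FS / EPS_MP)  # crossover distance (~87.7 m here)

def tx_energy(l_bits: int, d: float) -> float:
    """Energy to transmit an l-bit packet over distance d (metres)."""
    if d < D0:
        return l_bits * E_ELEC + l_bits * EPS_FS * d ** 2
    return l_bits * E_ELEC + l_bits * EPS_MP * d ** 4

def rx_energy(l_bits: int) -> float:
    """Energy to receive an l-bit packet."""
    return l_bits * E_ELEC

def ch_round_energy(l_bits: int, m: int, d_to_bs: float,
                    e_bf: float = 5e-9) -> float:
    """Per-round energy of a CH with m members: receive m packets,
    fuse them (beam-forming cost e_bf per bit, assumed value), and
    send one aggregated packet to the BS."""
    return (m * rx_energy(l_bits)
            + m * l_bits * e_bf
            + tx_energy(l_bits, d_to_bs))

# A far CH spends markedly more per round than a near one:
print(ch_round_energy(l_bits=4000, m=10, d_to_bs=220.0))
print(ch_round_energy(l_bits=4000, m=10, d_to_bs=60.0))
```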
A. Determining the Length of Levels
Figure 1 depicts a simple network divided into three radial levels with the BS at the center. The clusters located in the farther levels should be smaller than the closer clusters: owing to its greater distance to the BS, a far CH needs to spend more power to send its own packet, and by forming a smaller cluster with fewer members, it can save most of its energy for data communication to the BS. On the other hand, since the closer CH nodes do not consume a large amount of energy for data communication to the BS, they are able to form larger clusters to cover more area and support more members. The contribution here is to achieve energy balancing for each individual level. The total energy consumption of the i-th level is the sum of the energies consumed by every single cluster in that level. Considering k clusters in a level, the total level energy is

E_level = k · E_cluster (4)

where E_cluster (the total energy consumed in each cluster) is composed of the total energy consumed by the member nodes and the energy consumed by the CH itself:

E_cluster = E_CH + E_CM (5)

where E_CH and E_CM are the CH's energy consumption and the cluster members' energy consumption, respectively. The CH receives all the packets from the members, fuses them into a single packet, and forwards it to the BS. Considering ρ as the node density and a circular cluster region with coverage radius CRL, the energy consumed by the CH follows from the radio model above (Eq. 6), where d_toBS is the distance from the CH to the BS. If the network's width is named y and the distance of the BS to the closest edge of the network is called R_0 (see Fig. 2), d_toBS is bounded accordingly (Eq. 7). The cluster members' energy consumption (Eq. 8) requires the expectation of the members' squared distance to the CH; in [5] it is derived, for a uniform circular cluster, as

E[d²_toCH] = CRL²/2

where in the last step it is assumed that the node density ρ is independent of the physical positions. Inserting Eq. (6) and Eq. (8) into Eq. (5), and the result into Eq. (4), yields the total energy consumption of the i-th level (Eq. 10), where k is the number of clusters in that level (Eq. 11) and d_toBS_i is derived from the level geometry (Eq. 12). In Eq. (10) there are two unknowns, E_level_i and CRL_i. The energy-balancing criterion in this article is the equality of the levels' total energy consumption. Since the nodes farthest from the BS are expected to die fastest, we compute the minimum of the first level's total energy consumption and set the other levels' consumption equal to it; in this way we eliminate the unknown E_level_i.
Setting i = 1 in Eq. (10) gives the first level's total energy consumption as a function of CRL_1. To minimize it, its derivative with respect to CRL must vanish. It is easy to show that Eq. (10) has a minimum and that the derivative has a single acceptable root in the feasible range for the first (farthest) level. Using this result, the minimum of the first level's total energy consumption is obtained and is set equal to the other levels' total energy consumption. Therefore, in Eq. (10), E_level_i is now known and only CRL_i is unknown, which can be computed easily for the other levels. As Fig. 2 makes clear, the clusters grow larger as they get closer to the BS.
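The level-sizing computation can also be illustrated numerically. The sketch below writes a simplified stand-in for the first level's total energy as a function of CRL and minimizes it by grid search; the constants, the member-count model and the rough per-level cluster count are all assumptions for illustration, not the paper's exact Eq. (10).

```python
import math

# Assumed constants, chosen only to make the toy model run.
RHO = 0.04                          # nodes per m^2
L_BITS = 4000                       # packet size in bits
P, Q, ALPHA = 50e-9, 10e-12, 2.0    # radio constants of E = l(p + q d^a)
E_BF = 5e-9                         # aggregation energy per bit
R0, Y = 75.0, 200.0                 # BS offset and network width

def level1_energy(crl: float) -> float:
    """Toy total energy of the farthest level, modelled as several
    identical circular clusters of radius crl."""
    m = max(RHO * math.pi * crl ** 2 - 1.0, 0.0)  # members per cluster
    d_to_bs = R0 + Y - crl                         # rough CH-to-BS distance
    e_ch = (m * L_BITS * P                         # receive from members
            + m * L_BITS * E_BF                    # fuse packets
            + L_BITS * (P + Q * d_to_bs ** ALPHA)) # forward to BS
    # Members transmit to the CH; E[d^2 to CH] = CRL^2 / 2 (uniform disc).
    e_members = m * L_BITS * (P + Q * (crl ** 2 / 2.0))
    k = Y / (2.0 * crl)                            # rough cluster count
    return k * (e_ch + e_members)

# Simple grid search over feasible radii stands in for the closed-form
# derivative condition used in the paper.
best = min((level1_energy(1.0 + 0.1 * i), 1.0 + 0.1 * i)
           for i in range(500))
print(f"min level-1 energy {best[0]:.4e} J at CRL ~ {best[1]:.1f} m")
```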
B. Competitive Clustering Algorithm
This section describes the CH selection and cluster formation phase. At this stage we refer to [4] for the competition phase among the tentatively selected nodes. In this clustering scheme, the tentative nodes compete to become final CHs. These nodes become tentative with an equal probability T, and they broadcast a competition message within their competition radius, which is assumed to be fixed in EECS. This message contains the node's ID, its residual energy and its competition range, R_competition. Any other tentative node within this radius will hear the message. Any node that finds itself more powerful than the others within its competition radius introduces itself as a CH and broadcasts its advertisement message.
The main difference between the proposed algorithm and EECS is in the competition radius, which is fixed in EECS. In our algorithm, every tentative node competes within its respective level's CRL. The competition range of each level increases as the region's distance from the BS decreases. This means that smaller clusters, with fewer nodes joining them, are expected to form at far distances from the BS.
Details of the Second Algorithm
This section describes the CH selection and cluster formation phase. In our proposed clustering scheme, we assume that nodes whose residual energy exceeds the average network energy are better suited to become CHs in each round. For this purpose, we take advantage of the hello message broadcast from the BS at the beginning of every round for node synchronization. This hello message contains the average network energy computed at the BS. Recall that every node is supposed to send its residual energy to its respective CH, and each CH embeds the average cluster energy in the header of the aggregated data packet it forwards to the BS. The next step is the competition stage, where the tentative nodes compete to become final CHs. In EECS, each tentative node has to broadcast a competition message within its competition radius, and any other tentative node in this radius hears the message. Any node that finds itself more powerful than the others within its competition radius introduces itself as a CH and broadcasts its advertisement message. The message complexity of this scheme becomes problematic in dense networks, where too many nodes compete and negotiate too many packets to become CH.
We omit the negotiation in the competition phase and replace it with a timing schedule for advertising nodes. In this algorithm, each tentative node broadcasts its advertisement message across the network based on a timing schedule of the form

t_wait = t_0 · (1 − E_residual / E_initial)

where E_initial and E_residual are the initial and residual energy of the volunteer node, respectively, and t_0 is the maximum waiting time for advertisement, a predefined parameter. It is clear that a node with more residual energy will advertise itself sooner.
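The schedule, together with the quit rule described in the next paragraph, can be sketched as follows. The wait-time formula is the reconstruction given above, and the node list and competition range are hypothetical placeholders.

```python
T0 = 0.5  # maximum waiting time in seconds (predefined parameter)

def wait_time(e_initial: float, e_residual: float, t0: float = T0) -> float:
    """Nodes with more residual energy advertise sooner."""
    return t0 * (1.0 - e_residual / e_initial)

def elect_cluster_heads(tentatives, r_competition):
    """tentatives: list of (node_id, x, y, e_initial, e_residual).
    Process nodes in order of advertisement time; a node becomes CH
    unless an earlier (hence stronger) CH already advertised inside
    its competition range."""
    order = sorted(tentatives, key=lambda n: wait_time(n[3], n[4]))
    heads = []
    for nid, x, y, ei, er in order:
        beaten = any((x - hx) ** 2 + (y - hy) ** 2 <= r_competition ** 2
                     for _, hx, hy in heads)
        if not beaten:                  # no earlier CH heard in range
            heads.append((nid, x, y))   # advertise and become CH
        # otherwise the node quits and sleeps as an ordinary node
    return heads

# Hypothetical tentative nodes: (id, x, y, E_initial, E_residual).
nodes = [(1, 10, 10, 2.0, 1.9), (2, 15, 12, 2.0, 1.2), (3, 80, 40, 2.0, 1.7)]
print(elect_cluster_heads(nodes, r_competition=25.0))
# Node 2 lies inside node 1's range and has less energy, so it quits.
```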
The advertising packet embeds the node's ID, its residual energy and its competition range, R_competition. Suppose that S_i is a tentative node volunteering to become a CH. The goal is to remove any tentative node S_j (j ≠ i) that is in S_i's competition set and whose energy is less than S_i's residual energy. If S_j hears the advertisement and determines that it has less residual energy than S_i, it stops waiting to advertise itself, quits the competition immediately, and goes into sleep mode like the other ordinary nodes. Otherwise, it waits for its turn and starts advertising if no advertising message has been heard. The low message complexity of this approach is evident, since no extra negotiation is needed between the competing nodes. In the simulation section we demonstrate the effectiveness of the timing schedule on the network's lifetime.
In the first step, we consider a fixed competition range, just like EECS. We then refer to [8] for deriving a competition range as a function of the distance to the BS, for a network divided into equal-length radial levels with the BS at the center, via the recursive equation (14), where R_i is the competition range of a CH in the i-th level, α and d_i are the region increment and the region's distance to the BS, respectively, and k is a constant dependent on the network's initial settings. By this equation, the competition range of each CH is derived from its distance to the BS and the competition range obtained for the previous level. The competition range of each level decreases as the region's distance from the BS increases. This means that smaller clusters, with fewer nodes joining them, are expected to form at far distances from the BS. Therefore, less energy is consumed for receiving data, and the CH can save the majority of its energy for data transmission to the BS.
We evaluated the network's efficiency in terms of lifetime using a variable competition range and a timing schedule together. Detailed results are given in the next section.
SIMULATION AND RESULTS
The performance of the proposed algorithms is evaluated in MATLAB.
A. EECS-M Evaluation
In this section, EECS-M is evaluated. The simulated network has the characteristics described in section 3; other simulation parameters are listed in Table 1. The simulations are performed under fair (identical) conditions, and the results are compared with the EECS algorithm.
In Fig. 3 the network's lifetime is depicted for two scenarios in terms of the number of alive nodes. EECS-M shows a longer lifetime than EECS. More fairness in load distribution is obtained in our proposed algorithm by using this variable competition range (variable cluster sizes) for each level. Measuring the total network energy consumption is useful for comparing the two algorithms over the whole lifetime (Fig. 4). A lower slope in the figure indicates more fairness in energy distribution; it implies that the nodes consume their energy much more slowly than in the other case. EECS-M shows more efficient energy consumption than EECS in both scenarios. This is due to using unequal clusters based on their distances to the BS.
Another useful parameter for efficiency evaluation is the total number of data packets received at the BS. It is assumed that each CH forwards a single packet to the BS in each round of the network's lifetime. One might then guess that this parameter is related only to the number of clusters formed in the network; however, the total number of packets received at the BS is also related to the nodes' average lifetime. If too many clusters are formed in the network without considering their distance to the BS, the energy of the CH nodes drains very quickly: even if they can forward many packets to the BS, they will not last long enough to do so. Fig. 5 illustrates the total number of data packets received in each round of the network lifetime. The proposed clustering algorithm clearly delivers more data packets in both scenarios. Useful numerical results are given in Table 2. The number of levels formed using Eq. (12) is three for 400 nodes and four for 800 nodes. In both cases, the FND (first node's death) in EECS-M occurs sooner than in EECS; however, the LND (last node's death) occurs considerably later in EECS-M than in EECS. The TNSP (total number of sent packets to the BS) values are markedly larger in both cases for EECS-M: 5.1 million packets against 1.8 million in scenario 1, and so on.
B. Evaluation of the Second Algorithm
In this section, the second algorithm is evaluated. The simulated network has the characteristics described in section 3; other simulation parameters are listed in Table 3. In the figures, VCR stands for "variable competition range" and TS stands for "timing schedule".
We simulated the network for two different cases. In the first step, we apply the variable competition range derived via the recursive equation (14), with R_0 equal to 25, to EECS. We then employ the timing schedule and the variable competition range together. In both cases we evaluated the network's lifetime in terms of first/last node death. The results are depicted in Fig. 6, compared with EECS with no timing schedule and a fixed competition range. In all cases we also employed the cost function described in EECS for choosing the most appropriate CH for each non-CH node.
Fig. 6 shows that the timing schedule improves efficiency in terms of lifetime. The EECS algorithm uses a competition phase at the beginning of each round, and the number of packets that has to be negotiated in this phase is quite high; thus the nodes lose energy at the beginning of each round. Applying the variable competition range alone results in an improvement in lifetime, but a limited one. Employing the timing schedule and the variable competition range together shows a better improvement in the network's lifetime. This is due to the better energy balancing and greater fairness in load distribution provided by the variable competition range, while the timing schedule yields better energy efficiency in the setup phase of the algorithm. In Fig. 7 and Fig. 8 the same result is illustrated with a focus on the times of the first and last node's death in the three algorithms, respectively; the improvement in lifetime is more visible in these figures. Measuring the total network energy consumption is useful for comparing the three cases over the whole lifetime (Fig.
9). A lower slope in the figure indicates more fairness in energy distribution and longer lifetime. In this figure, the competitive algorithm with the timing schedule and variable competition range shows a lower slope in the decrease of the total network energy. It implies that the nodes consume their energy much more slowly than in the two other cases, since a considerable amount of energy is consumed in EECS for the competition setup phase. In the next step, the network is simulated for 100 independent distributions of sensor nodes. The simulation is done under fair conditions for our model, employing a variable/fixed competition range on the EECS algorithm and also using the timing schedule and the variable range together. A fixed competition range R_opt = 25, which depends on the optimum number of clusters in the network, is used for EECS.
Figure 10 illustrates the result of the network lifetime comparison, in terms of first node death, for 100 independent simulations. A 1.5% improvement is achieved by applying a variable competition range to EECS; however, the improvement in lifetime reaches 10% when the timing schedule and the variable competition range are used together in the proposed algorithm.
CONCLUSION
In this paper, we reviewed the energy-balancing technique in wireless sensor networks via clustering. The energy consumption of each CH in such a single-hop network depends on its distance to the BS and the number of cluster members. As the first proposed algorithm, we provide an extension of a well-known clustering protocol that uses a competitive algorithm to form clusters. In contrast to EECS, EECS-M uses unequal competition ranges for the tentative CH nodes, which are derived as functions of the node's distance to the BS. In the proposed algorithm, we initially found the optimum length for the first (farthest) level in the network, since these nodes are expected to die sooner than any other nodes closer to the BS. Then, assuming equal energy consumption for every subsequent level, we found the optimum length of each level and the number of clusters. The main difference between EECS-M and EECS is that EECS uses a fixed competition range for tentative nodes, whereas EECS-M employs the levels' total energy consumption to find the competition ranges. The proposed algorithm EECS-M prolongs the network lifetime thanks to the greater energy efficiency obtained from using unequal clusters. The results, such as a longer lifetime and a larger number of received data packets over the whole network lifetime, demonstrate the effectiveness of this proposed algorithm in terms of energy efficiency.
As the second proposed algorithm, we again provide an extension of EECS and of our previously proposed algorithm in [8], which uses a competitive algorithm to determine the competition range. In contrast to EECS, our model uses unequal competition ranges for the tentative CH nodes, which are recursive functions of the node's distance to the BS. This choice forms clusters of unequal size over the whole network: every CH node far from the BS forms a smaller cluster with fewer members and can save most of its energy for data transmission. In the second proposed algorithm, timing-schedule-based advertising is applied instead of negotiation between the volunteer nodes. This schedule is a function of node energy: the node with more residual energy advertises itself sooner. Any node that heard an advertisement and determined that it had less residual energy stopped waiting to advertise itself and quit the competition; otherwise, it waited for its turn and started advertising if no advertising message had been heard. We simulated the network for the original EECS algorithm and for our timing-based algorithm with and without a fixed competition range. In the first step, we applied the timing schedule to the proposed algorithm with the fixed competition range of EECS. We then employed the variable competition range derived via the recursive equation (14). In both cases we evaluated the network's lifetime in terms of first/last node death. Employing the timing schedule and the variable competition range together in the algorithm shows a better improvement in the network's lifetime. This is due to the better energy balancing, greater fairness in load distribution, and low energy drainage in the setup phase for competition negotiation. Moreover, the competitive algorithm with the timing schedule and variable competition range shows a lower slope in the decrease of the total network energy; it implies that the nodes consume their energy much more slowly than in the two other cases, since a considerable amount of energy is consumed in EECS for the competition setup phase.
Fig. 1: Radial regions (levels) with the BS at the center.
7,271.6
2010-05-26T00:00:00.000
[ "Computer Science" ]
A Learning Approach for Adaptive Image Segmentation
As mentioned in many papers, a lot of the key parameters of image segmentation algorithms are manually tuned by designers. This induces a lack of flexibility in the segmentation step of many vision systems. Through dynamic control of these parameters, the results of this crucial step could be drastically improved. We propose a scheme to automatically select segmentation algorithms and tune their key parameters thanks to a preliminary supervised learning stage. This paper details this learning approach, which is composed of three steps: (1) optimal parameter extraction, (2) algorithm selection learning, and (3) generalization of parametrization learning. The major contribution is twofold: segmentation is adapted to the image to segment, and at the same time, this scheme can be used as a generic framework, independent of any application domain.
Introduction
Image segmentation is a low-level task that consists of partitioning the image into homogeneous regions distinct from each other, according to some criteria. It is a crucial step in computer vision systems involving image processing (e.g. object recognition, content-based image retrieval), where the challenge is to perform an image segmentation with some semantic meaning. Although promising results are presented in many papers, genericity is still not proven. In fact, many of these approaches suffer from subjective tuning of key parameters. This problem also occurs in many vision systems where the segmentation stage is narrowly tuned to the specificities of the application domain by a human expert in image processing. In order to cope with this lack of flexibility, we propose an approach to automatic and adaptive segmentation based on learning optimal algorithm selection and key parameter tuning. We do not aim at building a new algorithm, but rather at adding a control scheme to existing ones. The underlying idea is that we think a segmentation process must be directed more by its goal than by the data. What do we expect from a segmentation algorithm? (1) It must be flexible enough to be ported from one domain to another, and (2) it must be adapted and well tuned to the segmentation task. As Draper said in [8], we need to avoid relying on heuristically selected, domain-specific features and methods, like ad hoc algorithms and decision rules. Program supervision techniques have proved to be good candidates for controlling image processing programs [25,5]. Such systems propose general architectures for planning, executing, evaluating and repairing image processing programs. But, as explained in [7], one negative point of these frameworks is that a lot of knowledge has to be provided in order to perform good parametrization. We aim at extending this approach by integrating learning at each step of the framework so as to obtain more dynamic and generic systems.
This paper is organized as follows. Section 2 gives a quick overview of the key issues of existing image segmentation methods. Section 3 first presents an overview of the proposed approach, then explains how knowledge on algorithm selection and parameter tuning is learned. Section 4 presents experimental results based on the proposed methodology applied to outdoor scenes. Finally, a conclusion and a discussion of future work are given in section 5.
Related Work
Over the last four decades, an increasing number of segmentation algorithms have been developed.
Initially, most of the effort was devoted to building algorithms based on low-level pixel cues such as color, edges and texture. That makes them universally applicable, but often leads to segmentations with little semantic meaning. A few of them were combined in cooperative frameworks [24,16] in order to compensate for the weaknesses of each. However, the inability to specify how homogeneous a region should be causes such algorithms to fail. Thus, the challenge of achieving more perceptually oriented segmentation has motivated researchers to develop models for extracting, grouping and classifying more perceptual cues [18,19,10,4]. Recent works [21,2,23,12,11,3,14] addressing these purposes apply learning techniques to capture model characteristics. In this section, we describe three approaches devoted to producing perceptual segmentation using various learning techniques: (1) algorithm parameter learning by synthetic object model matching, (2) object-class model learning by example, and (3) supervised parameter learning for perceptual segmentation of complex scenes.
In [21], Peng proposes a model-based multi-stage recognition system using reinforcement learning. In this paper, segmentation algorithm parameters and feature extraction algorithm parameters are trained to obtain the maximum model-matching confidence. However, the system is fully dependent on the object model (here, a polygonal approximation of a sideways car) and cannot be considered in situations where objects are harder to model, like natural objects seen from different points of view, scales, and so on.
In [3], a figure-ground learning scheme for class-based segmentation is described. It combines top-down and bottom-up segmentation processes to, respectively, extract class-relevant image fragments and thereafter obtain more accurate object boundaries. Good results are presented for simple object classes like sideways horses or cars. The main drawback of the system is its sensitivity to region variability. As mentioned by the authors, it relies on two main criteria; we observe that each of these criteria hides a key parameter, which is manually tuned from experience.
In [12], the authors present a method for figure-ground segmentation of objects in difficult real-world scenes (cars and cows) using a probabilistic formulation to integrate learned knowledge about the recognized category with the supporting information in the image. The main advantage of this work is that neither manually segmented images nor object-class models are needed during the learning process, except for a codebook of local appearances of the object category. However, the codebook grows proportionally to the complexity of the object to extract. Even if this knowledge is easily available for cars or cows, the task is more difficult for natural objects.
In [4], Chen combines spatially adaptive texture features and local color composition features to perform robust and precise perceptual segmentation of complex scenes. As explained by the authors, several key parameters are determined by subjective preliminary tests, such as the threshold for smooth/non-smooth texture classification and the threshold for color composition feature similarity. The choice of these parameters can be seen as a manual learning stage.
Finally, we note two main drawbacks of the existing methods for image segmentation learning. First, object-class model learning by example is limited in its applications: complex objects require too much knowledge to be easily modeled (especially by the end-user).
Secondly, perceptual segmentation approaches are still not able to dynamically adapt their parameters to all situations. An intermediate solution, which does not demand too much knowledge from the end-user (i.e., the choice of segmentation algorithms and of their parameters), has to be found.

Proposed Approach

In the approach proposed in this paper, we avoid giving explicit models of the object to extract and hand-chosen parameters, because that implies too much knowledge and restricts the application domain. Because segmentation is an ill-defined problem, we argue that no generic segmentation algorithm can be found. A way to perform automatic meaningful segmentations is to be able to select the best-adapted and well-tuned algorithms according to a set of manually segmented examples. This scheme can easily be applied by end-users who are not experts in image processing.

Overview

This approach has two main phases: a segmentation learning phase and an automatic segmentation phase. The learning phase is subdivided into three stages (see figure 1): (1) optimal algorithm parameter extraction, (2) construction of a case base which contains processed cases, where each entry of the base relates features describing an image to the corresponding optimal algorithm parameters, and (3) algorithm selection learning. The automatic phase uses this knowledge for automatic and adaptive segmentation (see figure 2). Features are given as input to the algorithm selection predictor trained in the previous stage (1). Then, similarity is determined by looking up the case base for similar cases (2). When the closest one is found, the image is segmented with the corresponding optimal parameters.

Learning Phase

The goal of this phase is to extract optimal algorithm parameters (see figure 3), to build the case base, and to train a predictor for algorithm selection (see figure 5).

Optimal Algorithm Parameter Extraction

From experience, in many segmentation algorithms, we have been able to come up with key parameters that reduce the complexity of the search space for the user and make it simple to achieve a reasonable segmentation while modifying only one or two parameters. The goal of this step is to automatically tune such key parameters for the considered images to segment. The only knowledge provided on the algorithms is the key parameters and some constraints on their ranges of values (e.g. minimum and maximum). Other parameters are set to default values. We pose the optimal algorithm parameter extraction as an optimization procedure. The purpose of an optimization procedure is to find a set of parameter values for which an objective function attains its best (maximum or minimum) value. This objective function is based on a measure of goodness/discrepancy, called a performance metric. A large variety of performance metrics have been proposed for evaluating segmentation results [26]. In this paper, we use a supervised evaluation method (also known as an empirical discrepancy method), which requires manually segmented reference images to be generated beforehand. In that way, we can directly evaluate segmentation against a perceptual ground truth and thus optimize algorithm parameters for perceptual segmentation, as far as possible. But this job is also subjective and time-consuming, especially for complex natural images. Our performance metric is area-based. It captures deficiencies such as inaccurate boundary localization, over-segmentation, and under-segmentation.
First, each region of the segmented image is associated with a region of the reference segmented image on the basis of region overlapping. In this way, we obtain three sets of region pairs: a set of identified region pairs, a set of regions of the segmented image not associated with any region of the reference segmented image (over-segmented regions), and a set of regions of the reference segmented image not associated with any region of the segmented image (under-segmented regions). For the inaccurate boundary localization error measure, a weighted sum of misclassified pixels for identified region pairs is computed. A similar calculation is applied to each region pair of the two other sets. So, the final output is a weighted sum of misclassified pixels, indicating how well the segmentation masks correspond to the reference ones. The smaller the output value, the better the segmentation quality. Note that a value of zero is achieved when the segmentation result and the reference fit exactly. More details on this evaluation metric can be found in [15]. Let i be an image of the training dataset I, G_i its ground truth (manual segmentation), A a segmentation algorithm of the library of segmentation algorithms A, and p^A a vector of parameters for the algorithm A. The result R_i^A of the segmentation of i with algorithm A is defined as R_i^A = A(i, p^A), where R_i^A is a set of regions. The goal is to obtain R_i^A as close to G_i as possible. The performance evaluation of this result is noted E_i^A = ρ(R_i^A, G_i), where ρ is the performance metric and E_i^A is a scalar. The purpose of the optimization procedure is to find the set of parameter values p_i^A which minimizes E_i^A: p_i^A = argmin_{p^A} ρ(A(i, p^A), G_i). Because ρ has no explicit mathematical form and is non-differentiable, standard powerful optimization techniques like Newton-based and quasi-Newton methods cannot be applied effectively. General methods suitable for such a problem are usually called direct search methods [9]. Here, we use a modified simplex search technique introduced by Nelder and Mead [17]. This optimization procedure has many advantages: first, the simplex technique is appropriate for optimizing several algorithm parameters at the same time. Then, the chosen performance metric allows algorithm performance scores to be objectively ranked. A third aspect we have experimented with is the possibility of constraining the criterion measures to be more sensitive to some regions of interest: the error measures for boundary localization, under-segmentation and over-segmentation can be weighted differently to take a highlighted region more into account. In this way, parameters will be specifically optimized for a better segmentation of this region.

Figure 4. Example of optimal parameter extraction. From left to right and from top to bottom: input image, manual segmentation, segmentation with default parameters, segmentation after optimal parameter extraction.

This optimization is performed for each image i ∈ I and for each algorithm A ∈ A. The output of this stage is a set of vectors p_i^A with associated E_i^A (one parameter vector per algorithm and per image).
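This optimization loop can be sketched compactly. The Python fragment below is a minimal illustration, not the authors' implementation: segment() is a hypothetical wrapper around one library algorithm, and a simple pixelwise discrepancy stands in for the paper's region-based metric ρ; scipy's Nelder-Mead routine plays the role of the modified simplex search.

```python
# Sketch of optimal-parameter extraction, assuming a hypothetical
# segment(image, params) function returning a label map.
import numpy as np
from scipy.optimize import minimize

def discrepancy(segmentation, ground_truth):
    """Toy stand-in for rho: fraction of pixels whose label differs.
    The paper's metric additionally weights boundary, over- and
    under-segmentation errors per matched region pair."""
    return np.mean(segmentation != ground_truth)

def extract_optimal_params(image, ground_truth, segment, p0, bounds):
    """Minimise E = rho(A(i, p), G_i) over the key parameters p with
    Nelder-Mead simplex search (rho is non-differentiable, so
    gradient-based optimisers do not apply)."""
    lows = [lo for lo, _ in bounds]
    highs = [hi for _, hi in bounds]

    def objective(p):
        # Clip parameters to their allowed ranges, as the paper
        # constrains each key parameter to a [min, max] scale.
        p = np.clip(p, lows, highs)
        return discrepancy(segment(image, p), ground_truth)

    result = minimize(objective, p0, method="Nelder-Mead")
    return result.x, result.fun  # optimal parameters p_i^A and score E_i^A
```

Nelder-Mead needs only function evaluations, which is exactly what a non-differentiable discrepancy metric permits.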
Case Base Construction

The first step (1) consists of ranking the optimization results from the previous stage. For each image i, according to the smallest value of E_i^A, the best algorithm is associated with the image and is denoted A_i. In parallel (2), a vector of features F_i is extracted. We use color distribution descriptors (color coherence vectors [20]), texture descriptors (steerable oriented Gaussian derivative features [1]) and some global statistical descriptors (global entropy, energy and variance) to construct the feature vectors. Then, a new case, composed of the feature vector F_i, the selected algorithm A_i and its optimal parametrization, is added to the case base (3).

Algorithm Selection Learning

When all images of I are processed (i.e. when the case base is entirely constructed), the predictor is trained with a neural network (a multi-layer perceptron). This network takes as input a vector of features F_i, and its output is the identifier of the associated algorithm A_i (4). The output of this stage is a trained algorithm selection predictor. The main difficulty of this stage is to train the neural network with only relevant features. For this challenge, two solutions are conceivable: first, intrinsic knowledge of the segmentation algorithm enables a heuristic selection of features. For example, a threshold-based algorithm is sensitive to the gray-level value of pixels; hence, a relevant feature is simply a histogram. But the relationship between algorithms and features cannot always be readily established, especially for complex algorithms with many parameters. Second, we can extract a broad set of general global features and then reduce it to a more relevant subset with a PCA. This is the solution we have adopted. For the presented results, the dimensionality of the computed vector is 209 features; PCA reduces it to 66 features.

Automatic Segmentation Phase

The automatic phase aims at using the knowledge learned during the learning phase for the best-adapted segmentation of new images (see figure 2). The case base can be decomposed into subsets. Consider s^A, the subset of cases {c_i^A}_{i∈I} where algorithm A has performed the best segmentation. For each new test image j of the test dataset J, a feature vector F_j is first computed and then reduced by PCA. This vector is used as input to the algorithm selection predictor, and an algorithm A_j ∈ A is selected. The optimal parametrization p^{A_j} for A_j is then that of the closest case: p^{A_j} = p_{i*}^{A_j} with i* = argmin_{i ∈ s^{A_j}} dist(F_i, F_j), where dist(F_i, F_j) is the Euclidean distance between F_i and F_j.
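As an illustration of how the trained pieces fit together at test time, the sketch below combines PCA reduction, an MLP selector and nearest-case parameter lookup. It is a hypothetical reconstruction using scikit-learn, not the authors' code; the class and all names are invented, and the MLP settings echo the configuration reported below (66 hidden units, sigmoid activation, 800 epochs).

```python
# Sketch of the case-based automatic segmentation phase.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

class CaseBaseSegmenter:
    def __init__(self, n_components=66):
        self.pca = PCA(n_components=n_components)
        self.selector = MLPClassifier(hidden_layer_sizes=(66,),
                                      activation="logistic", max_iter=800)
        self.cases = []  # list of (reduced_features, algorithm_id, params)

    def fit(self, features, algorithm_ids, params_list):
        reduced = self.pca.fit_transform(features)
        self.selector.fit(reduced, algorithm_ids)
        self.cases = list(zip(reduced, algorithm_ids, params_list))

    def select(self, feature_vector):
        f_j = self.pca.transform(feature_vector.reshape(1, -1))[0]
        a_j = self.selector.predict(f_j.reshape(1, -1))[0]
        # Restrict the lookup to cases where algorithm a_j performed best,
        # then take the parameters of the closest case (Euclidean distance).
        subset = [(f_i, p) for f_i, a_i, p in self.cases if a_i == a_j]
        f_star, p_star = min(subset, key=lambda c: np.linalg.norm(c[0] - f_j))
        return a_j, p_star
```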
Experimental Results

We have experimented with our approach on an image database composed of 140 sample images of aircraft in outdoor scenes. The images are very heterogeneous: some of them have a homogeneous background, others are strongly contrasted or have complex object-of-interest and background structures. This dataset is randomly divided into 67 training images and 73 testing images. Currently, three candidate image segmentation algorithms compose the library: a mean-shift segmentation algorithm [6], a region growing algorithm, and an inherently parallel hierarchical color segmentation algorithm [22]. The mean-shift algorithm has three key parameters to tune: the maximum neighbour color distance parameter, which controls region merging; the range radius of the mean shift sphere (relative to the first parameter); and the spatial radius of the mean shift sphere, which controls the smoothing of the region boundaries. The region growing key parameter is a threshold relative to the gradient image. Four seeds, one at each corner, are also defined as starting points. The third algorithm also has one key parameter, which defines the smallest allowed Euclidean distance between two similar RGB color vectors. The parameters of the neural network are: two layers with a sigmoid activation function, 66 hidden units (the number of features), the conjugate gradient training method, and a maximal number of epochs of 800 (the number of presentations of the entire training set). Table 1 presents the first results of the automatic phase. It can be seen, for the presented examples, that the system achieves good algorithm selection. For the first image, the system selected the color segmentation algorithm. The segmentation is quite good, since the different perceptual regions related to the sky, the plane and the ground are well separated. On the contrary, the region growing and mean-shift algorithms merged too many regions. For the second image, the homogeneous background led the system to select the region growing algorithm, which is visually the best choice. Mean-shift and color segmentation produced very close results for the third and fourth images.

Conclusion

This paper presents a method for learning how to perform adaptive image segmentation. This learning approach is structured in three main stages: a parameter optimization stage, a case base construction stage and an algorithm selection learning stage. The first stage consists of extracting optimal algorithm parameters for a training image dataset. We pose this optimal parameter extraction as an optimization problem. A performance metric based on region segmentation accuracy criteria is used for evaluating the segmentation result against the ground truth (manual segmentation). This performance metric is taken as the objective function to be minimized, and the Nelder-Mead simplex method is used to solve the optimization problem. The result is a set of optimal parameters and an objective evaluation value for each algorithm and for each image of the training dataset. The second stage consists of ranking the algorithms, extracting relevant features and constructing the case base: for each training image, a new case, composed of a vector of image features, the chosen algorithm and the optimal parametrization of this algorithm, is stored in the case base. The third stage makes use of this stored knowledge to train an MLP neural network for algorithm selection. The final segmentation performance is limited by the individual performance of the algorithms and by the size of the learning dataset. In order to fully validate our approach, we have to test and evaluate it on large image databases from various domains and to expand the algorithm library. This is our current work. The main drawback of our approach is the difficulty of producing manual segmentations of complex images during the learning phase. Human manual segmentation cannot be reproduced exactly by a segmentation algorithm. In order to fill this gap, we have to reduce the weight of manual segmentation in the learning process. More simplified ground truths (like figure-ground segmentation) coupled with a multiscale approach could be used. We also want to use more a priori knowledge about the objects to extract, for example knowledge for the optimal merging of regions of interest. Another way is to guide the segmentation task according to a visual-concept-based description [13].
Fugl-Meyer hand motor imagination recognition for brain–computer interfaces using only fNIRS

As a relatively new physiological brain signal, functional near-infrared spectroscopy (fNIRS) is being used more and more in the brain–computer interface field, especially for motor imagery tasks. However, the classification accuracy based on this signal is relatively low. To improve the classification accuracy, this paper proposes a new experimental paradigm and uses only fNIRS signals to complete the classification task for six subjects. Notably, the experiment is carried out in a non-laboratory environment, and the motor imagery movements are carefully designed. While imagining the motions, the subjects also subvocalize the movements to prevent distraction. Therefore, according to the motor area theory of the cerebral cortex, the positions of the fNIRS probes have been slightly adjusted compared with other methods. Next, the signals are classified by nine classification methods, and the different features and classification methods are compared. The results show that under this new experimental paradigm, classification accuracies of 89.12% and 88.47% can be achieved using the support vector machine method and the random forest method, respectively, which shows that the paradigm is effective. Finally, by selecting the five channels with the largest variance after empirical mode decomposition of the original signal, similar classification results can be achieved.

Introduction

In daily life, the function of the upper limbs accounts for 60% of total body function, while the function of the fingers accounts for 90% of upper limb function [1]. Complete hand function plays a very important role in people's work and daily life. However, some patients, such as those with hand muscle weakness, hand paralysis or hand sequelae after stroke, and even amputees, lose or partially lose hand function. Therefore, it is suitable to use brain-computer interface (BCI) technology to drive an exoskeleton or a prosthetic hand to compensate for their hand function. BCI systems have been widely used in the last decades as a communication medium between humans and external devices, particularly for people with movement issues [2]. A typical BCI system allows a person to interact with the environment without involving the peripheral nervous system or muscles, using only brain activity [3]. Motor imagery (MI) is one kind of BCI paradigm, which detects the activation of the brain's sensorimotor cortex to identify a person's motor intent. The sensory homunculus shows that the cortical areas controlling the human hands account for the largest proportion of the total cortical area, and a large proportion indicates that the corresponding movements can be controlled better. Therefore, it is possible to perform hand MI with BCI technology. To take advantage of brain activity, BCI systems require communication signals. Functional near-infrared spectroscopy (fNIRS) is a relatively new BCI signal with some favorable properties such as high temporal resolution, spatial resolution, portability, and ease of wearing. There have been some papers on the application of fNIRS in the BCI field, especially in the field of MI. Bhutta et al. classified fNIRS data for deception decoding, using the methods of linear discriminant analysis (LDA) and support vector machine (SVM), and the average classification accuracies of these two methods were 78.34% and 87.33%, respectively [4]. Yin et al.
utilized fNIRS to classify imagined clench force and clench speed, with each movement divided into three levels. The average classification accuracy using the fNIRS signal alone was 76±5% [5]. In Jiao's fNIRS-BCI study, the classification accuracy of finger percussion was 88.66% [6]. With the help of SVM, Neethu et al. studied the real-time binary classification of left-hand versus right-hand motor execution, and of left-hand versus right-hand motor imagery. The accuracies were 63% and 80%, respectively [7]. Abtahi [8] and Abibullaev [9] collected fNIRS to complete MI tasks of the upper limbs and hands, respectively, and used SVM to classify them; the classification accuracy was more than 90%. Zhu [10], Peng [11], and Ghafoor [12] also acquired only fNIRS to complete MI classification tasks, using LDA as the classification method, with accuracy rates of 87.8%, 70.43%, and 77.14%, respectively. By collecting the fNIRS signal, Wang et al. classified the grasping motor imagery of the right hand; the classification accuracy was 80.21 ± 6.7%, and the classification method was SVM [13]. One common fact of the above papers is that only the fNIRS signal is used for MI. In addition to simply using fNIRS for MI, other studies have used a mix of fNIRS and electroencephalography (EEG) signals. For example, Yin et al. used fNIRS-EEG to perform motor imagery tasks for hand speed and force, with a classification accuracy of 89 ± 2% [5]. Kaiser et al. completed a 2-class (right hand and feet) MI-based BCI task with 15 subjects; using the LDA classifier, the accuracy was 89 ± 6% [14]. Yvonne et al. acquired EEG-fNIRS signals to classify MI and motor execution; the accuracy was 87% [15]. Zhu et al. gathered EEG-fNIRS to complete a hand MI classification task, and the classification accuracies of SVM and LDA were 86% and 84.92%, respectively [16]. Fu et al. used the same experimental paradigm as the one presented in [5] and used a mix of EEG and fNIRS signals for classification; with the help of SVM, the accuracy was 74 ± 2% [17]. Although the EEG signal is noninvasive and has a high temporal resolution, it is also known for its low spatial resolution, low signal strength, and susceptibility to strong electrical noise [18,19]. A summary of the recent literature is given in Table 1. It can be seen that there are only a few types of MI tasks (two, three, or four types). In addition, no comparison of different classification methods has been made for MI tasks, and there is no in-depth study of the physiological signals. For this reason, this study attempts to classify new MI tasks and analyze the fNIRS signal by the method of empirical mode decomposition (EMD). Compared with other papers, the improvements of the results presented in this paper are as follows: 1. In the designed experiment, five kinds of hand MI tasks are completed, and each task contains four levels. The purpose of the experiment is to classify the four levels of each action using only fNIRS signals obtained from the motor areas and other corresponding regions of the brain. To the best of the authors' knowledge, no motor imagery experiment on this scale had been carried out before. The actions designed in this experiment belong to the Fugl-Meyer assessment scale and can be used as a basic BCI to control rehabilitation robots.
This lays the foundation for the future use of BCI technology to drive an exoskeleton manipulator or prosthetic hand for complex hand movement training. 2. In this study, nine classification methods and nine features are used to determine which combination of classification method and features leads to a satisfactory solution for the designed MI classification task. Experiments have shown a competitive classification accuracy. 3. To further reduce the number of fNIRS channels, the method of EMD is used to decompose the original fNIRS signal into several sub-modes. By calculating the maximum variance of each fNIRS sub-mode, the brain regions with the highest correlation with the motor imagery task are identified, and we can use just the fNIRS signals of these regions to complete the MI tasks. The experimental results show that the classification accuracy in this optimized setting is very close to that with all fNIRS channels. This paper is structured as follows: the section "Materials and methods" describes the instrumentation, experiment paradigm, fNIRS probe positions, experiment procedure, data processing, feature extraction, and EMD. The section "Results" presents experimental details regarding the fNIRS collection and the classification results of the designed experiment paradigm. The section "Conclusion and discussion" discusses the obtained results and concludes the paper.

Subjects

In this experiment, six healthy volunteers [one woman and five men of age 33.3 ± 4.7 (mean ± SD), all right-handed] took part in the test. They are all healthy, have no mental diseases or history of psychological disorders, and have no prior BCI experience. They were informed of the test in detail and were given a short warm-up test before the formal test. This human subject study was approved by the ethics committee of the Institute of Automation, Chinese Academy of Sciences.

Instrumentation

In this study, tests are conducted with the Brite 24 (Artinis, Netherlands) to collect fNIRS. The Brite 24 contains 24 channels and 18 optrodes (10 transmitters and 8 receivers). The receiver-transmitter distance is 3 cm and the sampling frequency is set to 25 Hz. To make the measurement more accurate, the device is equipped with three sizes of electrode caps (large, medium, and small), which can be selected according to the size of the subject's head.

Experiment paradigm

This experiment is carried out in a non-laboratory environment. Five different hand MI tasks are designed, which are essential components of daily life: the hand's group flexion and extension (GFE), hook-like grasping (HG), digital opposition (DO), cylindroid grasp (CG), and spherical grasp (SG). GFE and HG tasks are further divided into 4 levels [0%, 30%, 60%, and 100% of the maximum hand motion range (MHMR)]; DO, CG, and SG tasks are also divided into 4 levels [0%, 30%, 60%, and 100% of the maximum hand grasp force (MHGF)]. It is worth noting that 0% is actually a relaxed state, which applies to all five MI actions. Figure 1 shows the timing diagram of a single trial of the MI task. The timing diagram is generated by E-Prime 2 (Psychology Software Tools, Inc., Sharpsburg, KY, USA).
In Fig. 1, the bottom of the panel exhibits the MI task of one entire trial, consisting of four parts: during the baseline interval (BI), a red circle is displayed on the screen and the subject watches it for 20 s, keeping relaxed and motionless; during the ready interval (RI), a yellow circle appears on the screen to prompt the subject to prepare for the MI task; during the task cue interval (TCI), task pictures are displayed on the screen to remind the subject of what to imagine; during the task interval (TI), a green circle is presented on the screen and the subject performs the corresponding MI task for 20 s. In addition, when the subject imagines the movements, he/she is also asked to subvocalize the corresponding action to prevent distraction. A single trial lasts 44 s, and each level of the imagery task contains 30 trials. The middle of the top panel of Fig. 1 shows the five different MI tasks, where the left two pictures indicate the GFE and HG MI tasks and the far left picture displays 0%, 30%, 60%, and 100% of MHMR (the period of imagining hand opening and closing is about 6-8 s); the right three pictures indicate the DO, CG, and SG MI tasks and the far right picture shows 0%, 30%, 60%, and 100% of MHGF. The subject is required to reach the target hand grasp force within 2 s and then keep the force constant for the following 18 s.

Probe deployment of Brite 24

The brain regions associated with hand motor function mainly include the primary motor cortex, premotor cortex, and sensorimotor area. Because the subject is also asked to subvocalize the required MI task, Broca's area is also considered. As shown in Fig. 2, the probe positions of the fNIRS detection device are: the BA6 region [premotor cortex (PMC)], the BA4 region [motor cortex (M1), sensorimotor cortex (S1)], and Broca's area. The yellow dots are the transmission terminals and the blue dots are the receiving terminals. The shortest distance between a yellow point and a blue point is 3 cm. The device has a total of 16 terminals, comprising 24 channels.

Training

Before the formal data collection, all subjects are taught the process of the experiment several times until they can retell it fluently. For GFE and HG, they are asked to stretch their four fingers to the maximum extent they can for about 30 s to remember how their hand muscles feel. Then, using this feeling as a reference point, they should stretch or hook their hand to 60%, 30%, and 0% of MHMR, and also remember how their hand muscles feel at each specific level. For DO, CG, and SG, the MHGF is determined by the average of three maximal forces: digital opposition, cylindroid grasp, and spherical grasp with the hand dynamometer. The subject takes a break every 3 min. After the completion of the MHGF task, the subject is required to do DO, CG, and SG with the hand dynamometer, with targets of 60%, 30%, and 0% of MHGF, each lasting for 2 s, and then keep the hands still for 10 s. Each level of training involves ten tests to build the muscle memory.

Experiment procedure

The experiment procedures are shown in Fig. 3. The subject sits in a comfortable armchair with their arms resting naturally on the table. There is a desktop screen/iPad screen on the table, 0.8-1.2 m away from the subject. The subject is asked to start a trial and then to execute 30 trials for every task. One trial lasts 44 s, and the subject is required to avoid any body movements or frequent eye blinking. The trials (different MI tasks) in this study are not presented in a randomized sequence.
The order of actions is the same among the subjects. This setting can reduce the number of movement repetition errors in comparison to randomized protocols [20]. There are five actions in the experiment, each action includes four levels, and each level needs to be completed 30 times. The subjects take a 5-min break between tasks to regain their energy. To ensure the consistency of the measurement status when wearing the optical cap, the wearing process of the optical cap is set as follows: first, the central point CZ of the human head is found and used to fix the position of the CZ point of the electrode cap; second, the midline of the electrode cap is made to coincide with the line between the subject's nose bridge and the CZ point; finally, the locations of the electrodes are set in accordance with the international 10-20 standard for electrode placement. This ensures that when the subject wears the cap again, the positions of the electrodes are the same as when he/she first wore it.

fNIRS signal collection

The data collected by the instrument are concentration signals including oxygenated hemoglobin (Oxy-Hb), deoxygenated hemoglobin (Deoxy-Hb), and total hemoglobin (THb). The sampling frequency is 25 Hz, and the cut-off frequencies of the band-pass filter are 0.01 Hz and 0.1 Hz, respectively [17].

Feature extraction

In this paper, the sliding window method, which is most commonly used for processing fNIRS signals, is used for feature extraction. In the classification task, the selected time-domain features include: mean value (MV), slope factor (SF), mean absolute value (MAV), integrated absolute value (IAV), passing zero numbers (PZN), and passing mean numbers (PMN). For a window of length L (2 s in this test) of the fNIRS signal x(n), these features take the standard forms MV = (1/L) Σ_{n=1}^{L} x(n), MAV = (1/L) Σ_{n=1}^{L} |x(n)|, IAV = Σ_{n=1}^{L} |x(n)|, PZN = Num(x(n)x(n+1) < 0), and PMN = Num((x(n) − Avg(x))(x(n+1) − Avg(x)) < 0), where Num is the count of samples satisfying the condition in the brackets and Avg(x) is the average of x(n) over the time window of length L. SF is the slope factor of a polynomial curve fitted over the window (Polyfit), with x(1) the starting point of the area and x(L) its end point. The frequency-domain features used in this study contain the instantaneous amplitude (IA), instantaneous phase (IP), and instantaneous frequency (IF). These features can be extracted by the Hilbert transform (HT) [5,17]. The non-stationary signal x(t) is transformed into y(t) by the Hilbert transform as follows: y(t) = (1/π) P ∫_{−∞}^{+∞} x(τ)/(t − τ) dτ, where P is the Cauchy principal value. Then, IA, IP, and IF can be calculated, respectively, by IA(t) = sqrt(x²(t) + y²(t)), IP(t) = arctan(y(t)/x(t)), and IF(t) = (1/2π) dIP(t)/dt. Both the time-domain and frequency-domain features can be divided into two categories: one reflects the value of the curve in the time window, and the other reflects the degree of change of the curve value in the time window.
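The window features above can be computed in a few lines. The following Python sketch assumes a single-channel Oxy-Hb series sampled at 25 Hz and uses scipy's Hilbert transform for the frequency-domain amplitude; it is illustrative rather than the authors' implementation and covers a representative subset of the nine features.

```python
# Sliding-window feature extraction for one fNIRS channel.
import numpy as np
from scipy.signal import hilbert

FS = 25            # sampling frequency (Hz)
WIN = 2 * FS       # window length L = 2 s
STEP = 1 * FS      # sliding distance = 1 s

def window_features(x):
    """Return one feature row per 2-s window of the 1-D signal x."""
    ia = np.abs(hilbert(x))        # instantaneous amplitude from the HT
    rows = []
    for s in range(0, len(x) - WIN + 1, STEP):
        seg = x[s:s + WIN]
        mv = seg.mean()                                   # MV
        mav = np.abs(seg).mean()                          # MAV
        iav = np.abs(seg).sum()                           # IAV
        pzn = np.sum(np.diff(np.sign(seg)) != 0)          # PZN: zero crossings
        pmn = np.sum(np.diff(np.sign(seg - mv)) != 0)     # PMN: mean crossings
        rows.append([mv, mav, iav, pzn, pmn, ia[s:s + WIN].mean()])
    return np.asarray(rows)
```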
Classification

Besides SVM and LDA, some classical machine learning classification methods are also tested in this study: random forest (RF), quadratic discriminant analysis (QDA), k-nearest neighbor (KNN), decision tree (DT), feed-forward neural networks (FFNN), naive Bayes (NB), and ensemble learning (EL). SVM is modified according to the method of bagging [21], and its kernel function is chosen as the Gaussian radial basis function. The number of decision trees is set to 500 in RF. In LDA and QDA, the discriminant functions are chosen as linear and quadratic, respectively. There are 4 neighbors in KNN, and the number of trees in DT is 100. Back propagation is applied in FFNN. Naive Bayes is based on the attribute conditional independence assumption. In EL, the method is set as AdaBoostM2 and the weak learners are chosen as DT based on ID3. The sliding time window method is used as follows: the window length is set to 2 s and the sliding distance is set to 1 s. The proportion of the training set to the testing set is 3:2.

Optimize fNIRS channels

To utilize fewer fNIRS channels to complete the classification tasks, it is necessary to determine the relevant channels. EMD is such a signal decomposition method. It is the first step of the Hilbert-Huang transform, and it acts as a dyadic filter bank [22]. After EMD, the original signal can be decomposed into intrinsic mode functions (IMFs). The decomposition results in a set of empirical mode functions and a residual term, which can represent the trend of the signal or a fixed value [23]. The mathematical formula of EMD is given as follows:

X(t) = Σ_{i=1}^{n} IMF_i(t) + r_n(t), (11)

where X(t) is the original data, IMF_i(t) are the intrinsic mode functions, and r_n(t) is the residual term.
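As a concrete illustration of this channel-optimization step, the sketch below decomposes each channel with EMD and keeps the five channels whose largest-variance IMF is biggest. It assumes the third-party PyEMD package (installed as EMD-signal) for the decomposition; the paper does not state which EMD implementation was used, so this is one plausible choice.

```python
# Channel selection by largest IMF variance after EMD.
import numpy as np
from PyEMD import EMD

def select_channels(signals, n_keep=5):
    """signals: array of shape (n_channels, n_samples) of Oxy-Hb data.
    Returns the indices of the n_keep channels with the largest
    maximum IMF variance."""
    emd = EMD()
    scores = []
    for ch in signals:
        imfs = emd(ch)                  # X(t) -> set of IMFs plus residue
        scores.append(max(imf.var() for imf in imfs))
    return np.argsort(scores)[::-1][:n_keep]
```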
fNIRS signals

There are three types of signals collected in the experiment: Oxy-Hb, Deoxy-Hb, and THb. As shown in Fig. 4a, the red solid line is Oxy-Hb, the blue dashed line is Deoxy-Hb, and the green dotted line is THb. Figure 4 shows the fNIRS signal at channel 16 when the subject carries out three trials. The x-axis represents the sampling points, and one trial is composed of 1100 sampling points. Therefore, trial one is between 0 and 1100, trial two is between 1101 and 2200, and trial three is between 2201 and 3300. The y-axis is the fNIRS concentration. In Fig. 4a, the subject's MI movement is 30% of MHMR of the GFE MI task. The positions of the yellow vertical lines represent the starting time of the RI and the black dotted lines represent the starting time of the TI. The areas between the black solid lines and the yellow vertical solid lines are the BI processes, the areas between the yellow vertical solid lines and the black vertical dashed lines are the RI and TCI processes, and the regions between the black vertical dashed lines and the black solid lines are the TI processes. In Fig. 4a, when the signal curves cross the yellow line, the red solid curve rises quickly, the blue dashed curve changes only slightly, and the green dotted curve follows the red solid curve. The results show that the average amplitudes of the red solid curves and the green dotted curves in TI are higher than those in BI, while the amplitude of the blue dashed curve does not change much. Therefore, the data of the red solid curve (Oxy-Hb) are selected for the subsequent analysis. Figure 4b shows four differently colored lines (red, blue, green, and pink) which, respectively, represent the four levels of GFE (0%, 30%, 60%, and 100% of MHMR) based on Oxy-Hb. It can be seen from Fig. 4b that the pink curve changes most dramatically; the amplitude ranges, from largest to smallest, are then green, blue, and red. This indicates that the intensity of signal change is proportional to the magnitude of the imagined hand motion range. The fNIRS figures of the other four MI types are similar to those of GFE and are therefore omitted from this paper.

Classification results

With different features and different classification methods, the classification accuracy of the GFE MI task is provided in Table 2. Accuracies greater than 80% are marked in bold. Table 2 illustrates that the features with satisfactory accuracy are MV and IA, and the satisfactory classification methods are SVM, QDA, KNN, and RF. The results of the other four MI tasks are similar to the results of GFE and are omitted here. Furthermore, MV and IA are combined to check whether the classification accuracy can be improved. The results are given in Fig. 5. It can be seen that the classification accuracy is generally improved. In this case, the best classification methods are SVM and RF, whose classification accuracies are 89.12% and 88.47%, respectively.

fNIRS channel optimization results

According to (11), the IMFs of the 24 channel signals are calculated; they are given in Fig. 6. The first curve in Fig. 6 is the original signal, and curves 2 to 11 are the 10 IMFs decomposed from the original curve. The 5 channels with the largest IMF variance are given in Table 3. They are R7-T7, R8-T8, R3-T5, R5-T6, and R8-T10. R7-T7 corresponds to the PMC area, and R8-T8 corresponds to the M1 area. R3-T5 and R8-T10 correspond to Broca's area, and R5-T6 corresponds to the C3 area. Then, the signals from these 5 channels are used to fulfill the classification tasks. As seen from Fig. 7, with the IA and MV feature combination, the classification accuracy based on these five channels is relatively high, only slightly lower than that based on 24 channels. The results imply: 1. For the MI tasks designed in this paper, the classification accuracy using these 5 channels is comparable to that based on 24 channels. 2. For the MI tasks designed in this paper, the BA6 area is more sensitive than the BA4 area. 3. In this MI paradigm, Broca's area is activated to a certain extent, because the subject is required to subvocalize the specific MI tasks.

Discussion and conclusion

In the proposed experimental paradigm, only fNIRS signals are used. Therefore, there is no need to consider the synchronization between different physiological signals, which reduces the complexity of the system. In addition, it helps to collect more accurate BCI signals, because subjects may shake their heads during a long-term test (the EEG signal is easily disturbed by head shaking, while fNIRS is less affected). Although the designed MI tasks are relatively simple, they are the basis for other complex hand movements. In addition, these MI tasks belong to the typical movements of the Fugl-Meyer assessment scale. A desirable recognition of these tasks may lay the foundation for a BCI-controlled hand rehabilitation robot. The tests are conducted in a non-laboratory setting and are run by people who are familiar with the subjects, and a familiar environment makes the subjects behave more comfortably [24]. At the same time, a simple test shows that when subjects face a familiar environment, fNIRS signal fluctuations are relatively small, which is less likely to interfere with the normal fNIRS signals. With an appropriate feature selection process, it is recommended that for the proposed experimental paradigm, the combined feature (MV and IA) can lead to the highest classification accuracy (89.12% by SVM and 88.47% by RF). Furthermore, the original signals are decomposed into IMFs by EMD, and the 5 channels with the largest variance can also be used to complete the classification task, with a classification accuracy close to that of the complete set of channels. This suggests that these 5 channels can be used as principal components of the 24 channels, and that the functions of these 5 channels in the brain areas are consistent with the MI task. On the one hand, this can promote the understanding of the cerebral cortex.
On the other hand, according to the motor imagery task, more accurate use of the fNIRS signals of the corresponding cortical regions of the brain can further optimize BCI and improve efficiency.

Table 3. Five channels with the largest variance in the five MI tasks.
Task | Channels
GFE | T7-R7, R8-T8, R3-T5, R5-T6, R8-T10
HG | T7-R7, R8-T8, R3-T5, R5-T6, R8-T10
DO | T8-R8, R7-T7, R3-T5, R5-T6, R8-T10
CG | T8-R8, R7-T7, R3-T5, R5-T6, R8-T10
SG | T8-R8, R7-T7, R3-T5, R5-T6, R8-T10
Classification of Benign and Malignant Lung Nodules Based on Deep Convolutional Network Feature Extraction

With the rapid development of detection technology, CT imaging has been widely used in the early clinical diagnosis of lung nodules. However, accurate assessment of the nature of a nodule remains a challenging task due to the subjectivity of the radiologist. With the increasing amount of publicly available lung image data, it has become possible to use convolutional neural networks for the benign and malignant classification of lung nodules. However, as the network depth increases, network training methods based on gradient descent usually lead to gradient dispersion. Therefore, we propose a novel deep convolutional network approach to classify the benignity and malignancy of lung nodules. Firstly, we segmented, extracted, and performed zero-phase component analysis whitening on images of lung nodules. Then, a multilayer perceptron was introduced into the structure to construct a deep convolutional network. Finally, the minibatch stochastic gradient descent method with a momentum coefficient is used to fine-tune the deep convolutional network to avoid gradient dispersion. The 750 lung nodules in the lung image database are used for experimental verification. The classification accuracy of the proposed method reaches 96.0%. The experimental results show that the proposed method can provide an objective and efficient aid for classifying benign and malignant lung nodules in medical images.

Introduction

Lung cancer is one of the most common cancers in the world. Compared with other cancers, there are no obvious symptoms at an early stage. Early detection of lung cancer in its nodular form, together with screening, classification, and medical management, has been demonstrated to be extremely helpful and effective in decreasing lung cancer mortality [1]. Therefore, how to effectively diagnose lung nodules has become a topic of primary concern. The detection and diagnosis of lung nodules can be achieved by imaging procedures such as CT imaging [2] and magnetic resonance imaging [3]. The diagnosis of suspected lung nodules remains difficult due to human subjectivity, fatigue, and other limitations related to CT images. In some cases, radiologists may not be able to identify nodules with diameters <3 mm [4]. Therefore, it is important to study methods for classifying benign and malignant lung nodules in a computer-aided diagnosis (CAD) system for the early detection and diagnosis of lung cancer. At present, the classification of benign and malignant lung nodules in CAD systems is mainly performed by extracting the underlying features of the CT image of lung nodules, such as shape, position, texture, and density, through machine learning methods [5]. This classification approach based on underlying features has obtained good results in improving the accuracy of lung nodule diagnosis and reducing the labor intensity of doctors. However, real nodule shape, size, and texture features are highly variable, and the extraction of the underlying features is generally based on manual design, thus failing to fully describe these real nodules and resulting in low overall detection accuracy [6]. Therefore, how to perform automatic feature extraction and selection on CT images of lung nodules has become a hot research topic.
In recent years, with the rapid development of deep learning, many studies have demonstrated that convolutional neural networks (CNN) can be well applied to the field of medical images [7][8][9][10]. This is mainly because a CNN, as an end-to-end network architecture, can automatically extract features from the input image. In the classification of lung nodules, the commonly used CNN models are the two-dimensional CNN (2D-CNN) and the three-dimensional CNN (3D-CNN). Shen et al. [11] proposed a hierarchical learning framework, the multiscale CNN (MCNN), for lung nodule classification by extracting discriminative features from alternately stacked layers to capture the heterogeneity of lung nodules. This network not only improves the classification accuracy, but is also robust to noisy input. It is worth noting that the network architecture of the MCNN consists of alternating stacks of convolutional and max-pooling layers and takes a long time to extract features. Therefore, Tran et al. [12] proposed a new deep learning method to improve the classification accuracy of lung nodules in CT. The central idea is as follows: firstly, a novel 15-layer 2D-CNN architecture is constructed to automatically extract lung nodule features and classify candidates as nodules or non-nodules. Then, the focal loss function is used for network training to improve the classification accuracy of the model. However, as network depth increases, training problems may occur, such as overfitting [13] and vanishing gradients [14]. The information at the back of the network is not well fed back as the network gets deeper, which results in degraded network performance. The residual network therefore solves the gradient dispersion caused by network deepening by generating residual blocks to fit the original function [15]. Based on this study, Nibali et al. [16] achieved 89.9% accuracy in lung nodule classification on the LIDC dataset by constructing a fully convolutional residual neural network, which not only fully exploited the shallow and deep features of lung nodule images, but also reduced the number of parameters. Abraham et al. [17] used three 2D-CNNs (AlexNet, VGG16, and SilNet) to classify lung nodules and designed a new network model based on the obtained inference results to eliminate the deficiencies of existing networks for the early prediction of lung cancer. Through the above studies, it was found that 2D-CNNs have the advantages of low network complexity and fast computation, but they ignore some spatial information. This is mainly due to the fact that CT scans are 3D images; most existing CNN-based approaches use a 2D model, which cannot capture the spatial information between slices. Dou et al. [18] proposed a 3D-CNN for false positive reduction in automated pulmonary nodule detection from CT scans. The experimental results show that, compared with a 2D-CNN, the 3D-CNN can encode richer spatial information and extract more representative features via its hierarchical architecture trained with 3D samples. At the same time, Fu et al. [19] developed a computer-aided lung nodule detection system using a three-dimensional deep CNN to make full use of three-dimensional spatial information. The system mainly includes two stages: a lung nodule detection stage and a classification stage. In particular, in the detection phase, an 11-layer 3D fully convolutional network is used for the first time to screen all lung nodules. Experimental results demonstrate the effectiveness of using 3D deep CNNs for lung nodule detection. Zhao et al.
[20] combined multiscale feature fusion with multiattribute classification to construct a new 3D-CNN model and proposed a new loss function to balance the relationship between different attributes, achieving a classification accuracy of 93.92% on the LIDC dataset. Gao and Nie [21] proposed a method to discriminate benign and malignant lung nodules by combining a deep CNN with imaging features. The central idea is as follows: firstly, segment the lung nodule region from CT images and extract the imaging features of the nodule region using traditional machine learning methods. Then, train the 3D-Inception-ResNet model using the extracted lung nodules, extract the CNN features learned by the network, combine the two types of features, and use a Random Forest (RF) model for feature selection. Finally, a Support Vector Machine (SVM) is used for the differential diagnosis of benign and malignant lung nodules. Zhang et al. [22] proposed a 3D dense network architecture by taking advantage of densely connected convolutions, which encourage feature reuse and alleviate the vanishing gradient problem. The results show that the proposed model achieved good classification performance on the malignancy suspiciousness of lung nodules, with a classification accuracy of 92.4%. Through the above research and analysis, it can be seen that the classification performance of 3D-CNNs is better owing to the fuller extraction of feature information, but they require a large amount of data and take a long time to compute [23]. However, when the number of medical images is small, training a 3D-CNN model from scratch will result in poor classification results. Moreover, the training algorithm of a deep CNN usually adopts a layer-by-layer training mechanism based on gradient descent [24], where the network is trained layer by layer from the bottom up, and the output of the previous layer is used as the input of the next layer. The disadvantage of this learning mechanism is that the image pixels after the first layer are discarded, making the connection between the higher layers of the model and the input sparser, which in turn causes the error correction signal to become smaller and smaller from the top layer down and to converge to a local minimum. In addition, when using the backpropagation algorithm to propagate the gradient, the network parameters cannot be learned effectively as the number of network layers increases, resulting in gradient dispersion [25]. Guided by the above studies and observations, we propose a novel deep convolutional network (DCN) learning method to obtain better performance in classifying benign and malignant lung nodules. The central idea is as follows: firstly, segment the lung nodule region from the lung CT images to obtain the lung nodule images and perform zero-phase component analysis (ZCA) whitening on the image data so that all features in the images have the same variance and low feature-to-feature correlation. Then, add a multilayer perceptron layer after each convolutional layer of the constructed DCN to achieve cross-channel information interaction and integration. Finally, obtain the second derivative information of the error function directly without calculating the Hessian matrix, and introduce a momentum coefficient based on this information to improve the convergence of the network.
The key contributions are summarized as follows: (1) using ZCA whitening to process the input data can reduce the correlation between image pixels and thus eliminate redundant information; (2) introducing multiple perceptron layers after each convolutional layer can further enhance the expressive capability of the network; (3) using minibatch stochastic gradient descent with an additional momentum coefficient to train the deep network can effectively avoid gradient dispersion and enhance the generalization capability of the network.

Network System Architecture. We use a DCN to learn the features of CT images of lung nodules and enhance the representation capability of the model by introducing multilayer perceptrons, which in turn improves the classification accuracy of lung nodule images. In addition, a momentum coefficient is introduced to improve convergence when the minibatch stochastic gradient descent (MB-SGD) method is used to train the deep network model. The structural diagram of the lung nodule benign and malignant classification system based on DCN feature extraction is shown in Figure 1; it consists of three main parts. Stage I includes lung nodule image segmentation and extraction: firstly, a series of corresponding binary images is extracted from a large number of original lung CT images. Then, a logical AND operation is performed between each binary image and the original image to obtain the lung nodule images. Stage II is image preprocessing: redundant information among the lung nodule images extracted in Stage I is eliminated using ZCA whitening to reduce the correlation among the input image pixels. Stage III is DCN feature learning: firstly, the lung nodule images obtained from Stage II are used as the input of the DCN. Then, a multilayer perceptron layer is introduced after each convolutional layer to realize cross-channel information interaction and integration. Finally, the MB-SGD method with a momentum coefficient is used to fine-tune the DCN to avoid the gradient dispersion problem.

Lung Nodule Image Segmentation and Extraction. To study early cancer detection in high-risk populations, the National Cancer Institute (NCI) published the Lung Image Database Consortium (LIDC) dataset by collecting medical image files of the lung and the corresponding lesion annotations of the diagnostic results [23]. The LIDC dataset contains 1018 clinical lung CT scans of size 512 × 512, and each CT scan has an associated XML file containing the independent diagnostic results of four experienced radiologists [20]. Among them, the radiologists marked 928 lung nodules, most of which were 3-30 mm in size. The diagnostic results include the coordinates of the lung nodules larger than 3 mm in diameter and the degree of malignancy. However, because of the small size of lung nodules, it is unrealistic to classify lung nodules by processing the whole image. Therefore, we need to extract the lung nodule areas based on the nodule center coordinates marked by the doctors in the XML file. In the LIDC dataset, radiologists quantified the malignancy of lung nodules on a scale of 1 to 5: highly unlikely, moderately unlikely, indeterminate, moderately suspicious, and highly suspicious. When classifying benign and malignant levels, nodules with a level greater than or equal to 3 were classified as malignant, and those with a level less than 3 were classified as benign. There is variability in the marking of nodule locations by different experts, which affects the uniqueness of the nodule areas.
To eliminate the differences between experts and obtain standard lung nodule images, we use the threshold probability map (TPM) method to segment the lung CT scans [26]. The central idea is as follows: firstly, according to expert experience, a weight value is set for each expert's annotation to indicate the reliability of that annotation. Then, each pixel in the lung nodule region marked by an expert is given the expert's weight value. Finally, the weight value of a pixel is the sum of the weight values of all experts who marked the pixel. Now, assuming that the four experts have the same experience, the weight value of each expert is 0.25. If a pixel is marked as a component of a nodule by one expert, the probability that the pixel is a nodule is 0.25; if marked by 3 experts, the probability is 0.75. Thus, the lung nodule region is transformed into a probability map with values between 0 and 1. When segmenting the lung nodule images, only a threshold T is set; pixels higher than T are set to 1, and pixels lower than T are set to 0. This generates the corresponding binary images. Finally, each binary image is combined with the original image to obtain the lung nodule image. To improve the credibility of the study, when selecting lung nodule images in the LIDC dataset, we only considered cases in which at least three radiologists had made the same diagnosis of malignancy. We thus segmented and extracted images of lung nodules and eventually obtained a total of 750 cases, including 353 benign cases and 397 malignant cases. Because of the inconsistent sizes of the lung nodules, to facilitate the learning and training of the DCN, they were normalized and transformed into grayscale images of size 28 × 28. Part of the processed sample images is shown in Figure 2.

Image Preprocessing. Let I = {I^(1), ..., I^(i), ..., I^(d)}, I^(i) ∈ R^(I_W × I_H × C), be a collection of d images of size I_W × I_H, where C denotes the number of image channels. First, because visual images are highly affected by lighting, to reduce the impact of image brightness on feature learning, the images are contrast-normalized [27] using the following equation:

q^(i) = (I^(i) − mean(I^(i))) / sqrt(var(I^(i)) + ε), (1)

where mean(·) is the matrix averaging function. The normalization parameter ε is introduced to suppress experimental noise and prevent the denominator from being 0. For color and grayscale images, ε is usually taken as 10. Because of the strong correlation between adjacent pixels of an image, it contains a large amount of redundant information. To make all features in the image have the same variance and low feature-to-feature correlation, we perform ZCA whitening on the input data so that the whitened data are as close to the original data as possible while keeping the same dimensionality. A matrix transformation is performed for each image q^(i) in the image collection Q = {q^(1), q^(2), ..., q^(d)}, q^(i) ∈ R^(I_W × I_H × C), obtained by the contrast normalization operation: the values of the image pixels are taken as elements to form d column vectors, each of length I_W × I_H × C, which together form a matrix Ψ with I_W × I_H × C rows and d columns. By performing an eigenvalue decomposition of the covariance matrix C = cov(Ψ), [V, D] = eig(C) is obtained, and the input data are then scaled using the eigenvalue factors:

Ψ_PCA = diag(1/sqrt(diag(D) + ξ)) Vᵀ Ψ, (2)

where ξ is the whitening factor. To avoid unstable values or data overflow due to eigenvalues diag(D) close to 0, ξ can be taken as a very small positive number. Based on this, ZCA whitening is performed using equation (3),

Ψ_ZCA = V Ψ_PCA, (3)

and each column of the obtained matrix corresponds to the image data after ZCA whitening.
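A compact numpy sketch of the preprocessing in equations (1)-(3) follows. It is a plausible reading of the text rather than the authors' code: the exact form of the contrast normalization and the placement of ε and ξ are reconstructed from the description above.

```python
# Contrast normalization and ZCA whitening of flattened nodule images.
import numpy as np

def contrast_normalize(images, eps=10.0):
    """images: (d, H*W*C) matrix, one flattened image per row.
    Implements equation (1) per image, with eps guarding the denominator."""
    mean = images.mean(axis=1, keepdims=True)
    var = images.var(axis=1, keepdims=True)
    return (images - mean) / np.sqrt(var + eps)

def zca_whiten(images, xi=1e-5):
    """ZCA-whiten so features have equal variance and low correlation,
    staying as close to the original data as possible (equations (2)-(3))."""
    psi = images.T                       # columns are images, as in the text
    cov = np.cov(psi)                    # covariance matrix C = cov(Psi)
    d_vals, v = np.linalg.eigh(cov)      # [V, D] = eig(C)
    scale = np.diag(1.0 / np.sqrt(d_vals + xi))  # xi regularises tiny eigenvalues
    return (v @ scale @ v.T @ psi).T     # ZCA: V * D^{-1/2} * V^T * Psi
```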
DCN for Feature Learning. In the lung nodule image classification, each image is labeled with one of two classes, benign or malignant. Guided by the above studies, we use a DCN to extract features from all lung nodule images and use a softmax-loss classifier for binary classification. The feature learning model based on the DCN is shown in Figure 1. The model consists of an input layer, a feature extraction layer, and an output layer. The learning process is as follows: firstly, a feature map in the convolutional layer is generated by convolving with the same convolutional kernel based on a weight sharing strategy, to reduce model complexity and the number of training parameters. Then, in the pooling layer, the convolutional layer features are nonlinearly downsampled to filter out similar features, thereby reducing the computational complexity and enhancing the invariance of local features. Finally, a softmax-loss classifier is used to build a classifier on the learned deep features. Compared with other DCNs, we achieve cross-channel information interaction and integration by adding multilayer perceptron layers to the network architecture, thus further enhancing the generalization ability of the deep network. In the actual construction process, this is equivalent to introducing two 1 × 1 convolutional layers, which only change the convolutional kernel size and have no effect on the feature map size. Let the lung nodule image data after ZCA whitening be X = {x^(1), ..., x^(i), ..., x^(d)}, x^(i) ∈ R^(I_W × I_H × C). Since the preprocessed lung nodule images are grayscale images, the input data x^(i) and the convolution kernels are both 2D structures. The convolution layer convolves the input data or the previous layer's feature maps with multiple sets of convolution kernels, sums the corresponding positions of the outputs, adds the bias term, and obtains the convolution layer feature map under the action of the activation function. The output feature map is calculated as

x_{j'}^{l} = f( Σ_{j=1}^{M_l} x_j^{l−1} * f_{jj'}^{l} + b_{j'}^{l} ), (4)

where l is the index of the convolutional layer, x_{j'}^{l} denotes the j'-th output feature map of layer l, f_{jj'}^{l} is the convolution kernel connecting the j-th feature map of layer l − 1 with the j'-th feature map of layer l, M_l is the number of feature maps in layer l − 1, b_{j'}^{l} is the bias term, f(·) denotes the nonlinear activation function ReLU, and * is the convolution operator. Because the data distribution of the input image changes after the convolution operation, which leads to the internal covariate shift problem [28], we correct the data distribution by introducing a BN layer into the network architecture. The data after BN processing are similar to data after PCA dimensionality reduction [29]; that is, the correlation between features is reduced, and the data mean and standard deviation are normalized so that each feature dimension has mean 0 and standard deviation 1. In the actual construction process, we generally place the BN layer between the activation function and the convolution operation, so the forward convolution calculation in equation (4) is transformed into

x_{j'}^{l} = f( BN( Σ_{j=1}^{M_l} x_j^{l−1} * f_{jj'}^{l} + b_{j'}^{l} ) ). (5)
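The block structure described above (a 5 × 5 convolution followed by two 1 × 1 perceptron layers, with BN next to each activation, then pooling) can be sketched in PyTorch as follows. This is an illustrative reconstruction; the paper uses MatConvNet, and the channel counts here are invented rather than taken from Table 1.

```python
# One feature-extraction block of the DCN: conv 5x5 + two 1x1 mlpconv
# layers for cross-channel interaction, BN before each activation, pooling.
import torch.nn as nn

def dcn_block(in_ch, out_ch, pool):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=5, stride=1),
        nn.BatchNorm2d(out_ch),          # BN between convolution and activation
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=1),   # perceptron layer 1
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=1),   # perceptron layer 2
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        pool,                             # 2x2 pooling with stride 2
    )

# Example: the first block uses max-pooling, later blocks average-pooling.
block1 = dcn_block(1, 96, nn.MaxPool2d(kernel_size=2, stride=2))
block2 = dcn_block(96, 96, nn.AvgPool2d(kernel_size=2, stride=2))
```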
In the pooling calculation of equation (6), down(·) is the downsampling function and β is the downsampling coefficient. As in the convolutional layer, we also normalize the feature map of the pooling layer by introducing a BN layer, generally placed between the pooling operation and the activation function, so that the forward pooling calculation of equation (6) is transformed into equation (7). The DCN model is trained with a layer-by-layer backpropagation mechanism, and the parameters to be trained are the convolution kernels f. Let y^(i)_m denote the m-th dimension of the label of the i-th sample and ŷ^(i)_m the m-th dimension of the corresponding network output. The squared error cost function is given by equation (8), where M denotes the total number of categories. The update formula of the convolution kernel using mini-batch stochastic gradient descent (MB-SGD) is given by equation (9), where t denotes the current time step and η is the learning rate. It is well known that when training a DCN with the MB-SGD method, the error surface has different curvature along different directions; as the gradient direction keeps changing, points on the surface easily oscillate from one side to the other during the descent, so the gradient cannot converge to the minimum value [30]. Therefore, we retain both the gradient vector from the previous step of the MB-SGD method and the second-derivative information of the error function obtained when the network parameters were last updated. This second-derivative information estimates not only the gradient of the cost-function surface at a point (first-order information) but also the curvature of the surface (second-order information). Once the curvature is known, the approximate location of the minimum of the cost function can be estimated. The update formula of the convolution kernel after incorporating the second-derivative information is given by equation (10), where ∇f^l_{jj′}(t − 1) is the gradient at time t − 1. From the QuickProp theory proposed by Fahlman [31], it is known that if the step size in the kernel update formula grows too quickly, the convergence process tends to diverge. Therefore, a momentum factor μ is introduced to overcome this drawback. Equation (11) is equivalent to equation (12) when the conditions of equation (11) hold. Based on the above analysis, equation (9) is transformed into equation (13). Results and Discussion 3.1. Implementation Details. The deep network used in the experiments consists of three blocks, each with the same number of layers, comprising a convolutional layer, a multilayer perceptron layer, and a pooling layer. After the original data are input to the first block, a convolution with a stride of 1 and a kernel size of 5 × 5 is performed on the input image to extract features. Then, two perceptron layers with a stride of 1 and a kernel size of 1 × 1 are used to interact with and integrate the feature information. Finally, a pooling layer with a size of 2 × 2 and a stride of 2 is used for downsampling. The first pooling layer uses max pooling and the rest use average pooling. At the end of the last block, a softmax-loss classifier is attached after the average pooling is executed. The feature map dimensions in the three blocks are 28 × 28, 12 × 12, and 4 × 4, respectively. The exact network configuration we use on the dataset is shown in Table 1.
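Because the exact second-derivative and momentum update rules (equations (10)-(13)) are not reproduced in this text, the sketch below shows one standard way to add a momentum factor μ to the MB-SGD kernel update of equation (9). It is a plausible reading under that assumption rather than the authors' exact rule; the learning-rate and μ defaults simply mirror the values reported later in the paper.

```python
import numpy as np

def mbsgd_momentum_step(kernel, grad, velocity, lr=0.1, mu=0.9):
    """One mini-batch SGD update with a momentum factor mu.

    The previous update direction (velocity) is retained and combined with
    the current gradient, which damps the oscillations described for plain
    MB-SGD on surfaces with different curvature along different directions."""
    velocity = mu * velocity - lr * grad   # keep last-step information
    kernel = kernel + velocity             # apply the combined update
    return kernel, velocity

# Toy usage on a single 5x5 kernel over a few fake mini-batches
kernel = np.random.randn(5, 5) * 0.1
velocity = np.zeros_like(kernel)
for step in range(5):
    grad = np.random.randn(5, 5) * 0.01    # placeholder mini-batch gradient
    kernel, velocity = mbsgd_momentum_step(kernel, grad, velocity)
```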
Training Setup. We use a MATLAB-based deep learning framework (MatConvNet) to construct the proposed deep network model on a workstation with a Windows 10 system, an i9 processor, and 64 GB of RAM. We choose stratified 10-fold cross-validation as a rigorous validation scheme [20]. All data are randomly divided into 10 subsets; nine of these subsets are used for training and one for testing, and this is repeated ten times. In the training stage, we used the MB-SGD method to optimize the model and initialized the parameters according to our experience. The initial learning rate is set to 0.1, and the mini-batch size is 50. We adopt the weight initialization strategy described in [32] with a weight decay of 0.0001. Our experiments were carried out for a total of 120 epochs. Validation of Hyperparameter. In this paper, the proposed network parameters are updated repeatedly by the MB-SGD method until the loss function reaches its minimum value. For the MB-SGD method with a momentum coefficient, the value of the momentum coefficient μ directly affects the location of the minimum of the cost function, which in turn affects the classification accuracy on the lung nodule images. For this reason, it is necessary to study the value of μ. Following the training setup described above, ten experiments with 10 different values of μ were conducted to select the best value, changing μ from 0 to 1 in steps of 0.1. Table 2 shows the impact of different μ values on the classification performance of the model. It can be seen that the classification error rate on the lung nodule images gradually decreases as μ increases; when μ is 0.9, the model obtains the lowest classification error rate. Classification of Benign and Malignant Lung Nodules under Different Sample Configuration Schemes. In this section, we conduct a series of experiments to find the most suitable sample configuration scheme. Following the data distribution of the training set, a subset (75 samples) is randomly selected as the testing set. To verify the impact of the training set size on the generalization ability of the model, we gradually increase the number of training samples. As can be seen from the results in Figure 3, changing the number of training samples affects the classification performance of the network: as the number of training samples increases, the classification accuracy tends to increase. When the number of training samples is 500 or 550, our network model achieves the best classification performance; after that, the classification performance shows a decreasing trend. This behavior may be limited by the number of cases in the dataset and by vanishing gradients. Optimizer Selection. Gradient descent is the most commonly used optimization method in machine learning, and current networks are trained with different variants of gradient descent. Therefore, a good optimizer design is of great practical importance to avoid gradient dispersion in the deep network. In this section, to test the speed and stability of the proposed MB-SGD method with a momentum coefficient during training, we plot the RL curves and analyze the effectiveness of the proposed algorithm by comparing it with the traditional MB-SGD method.
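The sketch below outlines the stratified 10-fold protocol and the μ grid search described above, written with scikit-learn purely for illustration (the original experiments used MatConvNet). The train_and_evaluate function is a placeholder standing in for a full DCN training run with the stated settings (learning rate 0.1, mini-batch 50, 120 epochs); the label array reflects the 353 benign and 397 malignant cases.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def train_and_evaluate(train_idx, test_idx, mu):
    """Placeholder: train the DCN with MB-SGD (momentum mu, lr 0.1,
    mini-batch 50, 120 epochs) on train_idx and return the test error rate."""
    return np.random.rand()  # stand-in for the real training run

labels = np.array([0] * 353 + [1] * 397)           # 353 benign, 397 malignant
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

results = {}
for mu in np.round(np.arange(0.0, 1.0, 0.1), 1):    # candidate momentum values
    fold_errors = [train_and_evaluate(tr, te, mu)
                   for tr, te in skf.split(np.zeros(len(labels)), labels)]
    results[mu] = np.mean(fold_errors)               # mean error over 10 folds

best_mu = min(results, key=results.get)              # 0.9 in the reported runs
```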
The mean square error RL curve is a smoothed sequence of the minimum mean square error; its shape shows how the prediction error of the CNN model evolves during training as the number of iterations increases, and it also reflects the speed and stability of network convergence. In this experiment, the mini-batch size is 50 and the number of training samples is 500, so one pass over the training data updates the weights 10 times, and 120 epochs of training update the weights 1200 times. The experimental results are shown in Figure 4. It can be seen that the two learning algorithms have similar minimum mean square error values in the initial phase of training. As the number of iterations increases, the RL curves of both algorithms gradually decrease, which indicates that the training of the network is stable and reliable. It is worth noting that the RL curve of MB-SGD shows a smooth trend from the 200th iteration to the 700th iteration. In addition, MB-SGD with a momentum coefficient reaches smaller minimum mean square error values than plain MB-SGD. This indicates that MB-SGD with a momentum coefficient converges faster and more stably during network training. Deep Network Architecture Comparison. One of the most important advantages of deep learning is the ability to automatically learn relevant features from the original image. To further evaluate the effectiveness of the proposed method, classical deep learning models such as CNN, DBN, and SAE were used to extract lung nodule features and perform classification experiments under the same dataset split. Table 3 gives a comparison of the classification results of the different deep learning models. It can be seen that our proposed DCN model obtains superior classification performance compared with the other classical deep learning models. In addition, among the classical deep learning models, the classification performance of CNN is better than that of DBN and SAE. Figure 5 shows a visualization of the lung nodule features extracted by the three classical deep learning models. The results show that the features extracted by CNN are more abstract; combined with the experimental results in Table 3, it can be seen that CNN has clear advantages in image feature extraction. Table 1 (excerpt): Input 28 × 28 × 1; conv1, 6 kernels of 5 × 5 with stride 1 and pad 0, output 24 × 24 × 6; cccp1 and cccp2, 6 kernels of 1 × 1, output 24 × 24 × 6; maxpool1, 2 × 2 with stride 2, output 12 × 12 × 6; conv2, 12 kernels of 5 × 5, output 8 × 8 × 12; cccp3 and cccp4, 12 kernels of 1 × 1, output 8 × 8 × 12. As shown in Table 4, all methods used the LIDC database for their experiments. In this study, because these nodules lack histopathology results and because lung nodules are small, it is unrealistic to classify lung nodules by processing the whole image. Therefore, all methods extract the lung nodule area based on the nodule center coordinates marked by the doctor in the XML file and process the data according to the respective proposed method. Finally, according to each proposed model, different numbers and types of lung nodule samples were obtained. It can be seen from Table 4 that, compared with 2D-CNN [4, 33-35], 3D-CNN [20, 21, 36] achieves better classification performance while using fewer lung nodule samples. This is mainly because 3D-CNN can extract spatial information from lung nodules more effectively.
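For a runnable reference, the sketch below re-creates the Table 1 configuration excerpt in PyTorch (the original model was built in MatConvNet). The layers visible in the excerpt are reproduced; the width and kernel size of the truncated third block and the final linear softmax head are assumptions, marked in the comments.

```python
import torch
import torch.nn as nn

class NoduleDCN(nn.Module):
    """Illustrative re-creation of the Table 1 excerpt: 5x5 convolutions
    followed by 1x1 'cccp' (perceptron) layers and pooling, with BN placed
    before each ReLU as described in the text."""

    def __init__(self, num_classes=2):
        super().__init__()
        def conv_bn_relu(cin, cout, k):
            return nn.Sequential(nn.Conv2d(cin, cout, k),
                                 nn.BatchNorm2d(cout), nn.ReLU())
        self.block1 = nn.Sequential(
            conv_bn_relu(1, 6, 5),      # conv1: 28x28x1 -> 24x24x6
            conv_bn_relu(6, 6, 1),      # cccp1
            conv_bn_relu(6, 6, 1),      # cccp2
            nn.MaxPool2d(2, 2))         # -> 12x12x6
        self.block2 = nn.Sequential(
            conv_bn_relu(6, 12, 5),     # conv2: -> 8x8x12
            conv_bn_relu(12, 12, 1),    # cccp3
            conv_bn_relu(12, 12, 1),    # cccp4
            nn.AvgPool2d(2, 2))         # -> 4x4x12
        self.block3 = nn.Sequential(    # assumed width/kernel: rows not shown
            conv_bn_relu(12, 24, 3),    # -> 2x2x24 (assumption)
            conv_bn_relu(24, 24, 1),
            conv_bn_relu(24, 24, 1),
            nn.AdaptiveAvgPool2d(1))    # global average pool
        self.classifier = nn.Linear(24, num_classes)  # assumed softmax-loss head

    def forward(self, x):
        x = self.block3(self.block2(self.block1(x)))
        return self.classifier(torch.flatten(x, 1))

logits = NoduleDCN()(torch.randn(4, 1, 28, 28))   # 4 sample 28x28 patches
```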
The results show that our proposed method outperforms the other existing works, which also proves the effectiveness of the proposed DCN with MB-SGD for the classification of lung nodules. Moreover, compared with a single CNN classification model, a model that combines different classifiers and feature fusion has better classification performance. In addition, this study also demonstrates the importance of choosing an appropriate training process to improve the performance of the model. Conclusions As one of the most popular research directions in the field of machine learning, deep learning can learn high-level features of data and has powerful nonlinear representation capabilities. In this study, we propose a novel DCN learning method for the benign and malignant classification of lung nodules. The main advantages are as follows: (1) ZCA whitening is performed on the extracted lung nodule images, which effectively eliminates redundant information between pixels; (2) multilayer perceptron layers are combined with the CNN to construct a DCN model capable of learning strongly robust features, thus further improving the feature representation power of the network; (3) the MB-SGD method with a momentum coefficient is used to train the deep network, which effectively avoids local optima and gradient dispersion. The experimental results on the LIDC dataset show that the proposed DCN learning method achieves high accuracy for the benign and malignant classification of lung nodules. Data Availability The data used to support the findings of this study are available from the first author upon request. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
7,513.8
2021-10-27T00:00:00.000
[ "Medicine", "Computer Science" ]
Phenotype of Cardiomyopathy in Cardiac-specific Heat Shock Protein B8 K141N Transgenic Mouse* Background: A lys141Asn (K141N) missense mutation in heat shock protein (HSP) B8 causes distal hereditary motor neuropathy (HMN). Results: HSPB8 K141N transgenic mice exhibited mild hypertrophy and apical fibrosis as well as slightly reduced cardiac function. Conclusion: A single point mutation of HSPB8, such as K141N, can cause cardiac disease. Significance: The cardiomyopathy phenotype was observed in cardiac-specific HSPB8 K141N transgenic mice. A K141N missense mutation in heat shock protein (HSP) B8, which belongs to the small HSP family, causes distal hereditary motor neuropathy, which is characterized by the formation of inclusion bodies in cells. Although the HSPB8 gene causes hereditary motor neuropathy, obvious expression of HSPB8 is also observed in other tissues, such as the heart. The effects of a single mutation in HSPB8 upon the heart were analyzed using rat neonatal cardiomyocytes. Expression of HSPB8 K141N by adenoviral infection resulted in increased HSPB8-positive aggregates around nuclei, whereas no aggregates were observed in myocytes expressing wild-type HSPB8. HSPB8-positive aggresomes contained amyloid oligomer intermediates that were detected by a specific anti-oligomer antibody (A11). Expression of HSPB8 K141N induced slight cellular toxicity. Recombinant HSPB8 K141N protein showed reactivity against the anti-oligomer antibody, and reactivity of the mutant HSPB8 protein was much higher than that of wild-type HSPB8 protein. To extend our in vitro study, cardiac-specific HSPB8 K141N transgenic (TG) mice were generated. Echocardiography revealed that the HSPB8 K141N TG mice exhibited mild hypertrophy and apical fibrosis as well as slightly reduced cardiac function, although no phenotype was detected in wild-type HSPB8 TG mice. A single point mutation of HSPB8, such as K141N, can cause cardiac disease. COS cells increased the interaction of HSPB8 and HSPB1, leading to the formation of intracellular aggregates (7). A recent study reported that an HSPB8 gene KO mouse showed normal development and no obvious abnormalities in tissue function, but cardiac dysfunction and remodeling, as well as transition into heart failure, were accelerated by pressure-induced overload to the heart (9). These results suggest that the overexpression of mutant HSPB8 can lead to the induction of some degree of cellular toxicity in cultured cells, although no similar phenotype was detected in the HSPB8 knock-out mouse. Thus, it is unclear whether a missense mutation of HSPB8 alone can cause neurodegenerative diseases such as HMN and CMT disease because of loss of function, particularly in vivo. Furthermore, it is known that the HSPB8 expression pattern is ubiquitous, with the highest expression in muscle tissue (10). This implies that mutant HSPB8 protein can be present in muscle tissue, such as the heart, and can affect cardiac function as well as neuronal tissues. To study the structure-function relationship of a single missense mutation of HSPB8 in vivo, cardiac-specific TG mice expressing HSPB8 K141N as well as wild-type HSPB8 TG mice as controls were generated using tetracycline-controlled transcriptional activator (tTA) and the attenuated myosin heavy chain system (11,12). 
The phenotype of HSPB8 K141N TG mice is an accumulation of aggregates containing HSPB8, causing slightly impaired cardiac function, mild ventricular hypertrophy, and apical cardiac fibrosis with cardiac mitochondrial dysfunction at approximately 6 months of age. Our results indicate that mutant HSPB8 TG mice have a cardiomyopathy phenotype, whereas no abnormality was observed in wild-type HSPB8 TG mice. Thus, our data suggest that the HSPB8 mutation can act in a dominant negative manner and that the cellular toxicity of the mutant protein may play an important role in disease development. Our results imply that the HSPB8 mutation can cause cardiomyopathy as well as neuronal degenerative diseases such as HMN and CMT disease and that a phenotype induced by the mutant HSPB8 may result from the mild toxicity of the mutated HSPB8 protein. EXPERIMENTAL PROCEDURES cDNAs-cDNAs of HSPB8 and CryAB were isolated by reverse transcription-PCR and used to generate recombinant protein and adenoviral constructs as described previously (13). The missense mutations HSPB8 K141N and CryAB R120G were introduced using reverse transcription-PCR and subcloned into the pBSKII vector (Agilent Technologies, Palo Alto, CA). To distinguish the transgenic products from endogenous protein, a FLAG epitope was introduced at the N terminus of CryAB R120G. An HA epitope was introduced at the N terminus of HSPB8 as described previously (12,13). Recombinant Protein-To produce recombinant protein, His epitope-tagged wild-type HSPB8, HSPB8 K141N, wild-type CryAB, and CryAB R120G were overexpressed in BL21 cells (Invitrogen) using the pET system (Novagen, Madison, WI) and purified with a nickel-nitrilotriacetic acid column (Qiagen) as described previously (13). To measure the amyloid oligomer level (amyloid oligomer is positive immunoreactive material against the anti-oligomer antibody), each recombinant protein was incubated and blotted on a nitrocellulose membrane and quantified as described previously (13). The cellular toxicity of the recombinant HSPB8, HSPB8 K141N, CryAB, and CryAB R120G protein in HEK293T cells as well as cardiomyocytes was determined using a 3-(4,5-dimethylthiazol-2-yl)-2, 5-diphenyltetrazolium bromide (MTT) assay. Recombinant proteins were added to give a final concentration of 0.4 mg/ml in serumfree Dulbecco's modified Eagle's medium and incubated for 12 h. After incubation, MTT assays were performed as described previously (12,13). Cardiomyocyte and N1E115 Cell Cultures and Adenovirus Infection-Rat neonatal cardiomyocytes were isolated using the Worthington cardiomyocyte isolation system (Worthington Biochemical Corporation, Lakewood, NJ). After isolation of the rat neonatal cardiomyocytes, the cells were grown on glass slides coated with gelatin as described previously (12,14). Mouse neuroblastoma N1E-115 cells were cultured as described previously (15). Replication-deficient recombinant adenoviruses were made using an AdEasy system (Agilent Technologies) as described previously (5,13,14). Viral titration was determined using an AdEasy viral titer kit (Agilent Technologies), and the manufacturer's protocol was followed to calculate the infectious units/milliliter as well as the multiplicity of infection (MOI), which is the ratio of transfer vector transducing particles to cells. To address the transgene expression level to the endogenous HSPB8 level, cardiomyocytes were infected at a MOI of 1 for both the HSPB8 and HSPB8 K141N virus as described previously (13). 
The N1E-115 cells were infected at a MOI of 10 for each adenovirus. Cellular viability was measured using an MTT assay (12,13). Immunohistochemistry-Immunohistochemical analyses were performed as described previously (12,14). Alexa 488-conjugated anti-rabbit and Alexa 568-conjugated anti-mouse antibodies and TO-PRO-3 for nuclear staining were purchased from Molecular Probes (Eugene, OR), anti-HSPB8 antibody from Imgenex Corporation (San Diego, CA), anti-HA antibody from MB Laboratories Co., Ltd. (Nagoya, Japan), and anti-cTnI antibody (MAB1691) from Millipore (Billerica, MA). The anti-oligomer antibody (A-11) was generated and used as described previously (12,14). Image J 1.38x public domain software was used to quantify the immunofluorescent intensity. The results from 30 to 50 cells were averaged for cohort comparison. Areas stained with the oligomer antibody were defined, and the average pixel intensities for the cardiomyocytes were determined for comparison (12,14). Isolation of Mitochondrial and Cytosolic Fractions-Isolation of mitochondrial and cytosolic fractions was performed as described previously (14). Hearts were homogenized in buffer containing 250 mM sucrose, 10 mM Tris-HCl (pH 7.4), 1 mM EDTA, 1 mM Na3VO4, and complete protease inhibitor mixture tablets (Roche Applied Science). The homogenates were centrifuged at 1,000 × g for 10 min at 4°C to remove the nuclei. Supernatant fluids were then centrifuged again at 13,000 × g for 30 min at 4°C. Pellets were washed extensively in the same buffer and centrifuged at 13,000 × g for 30 min at 4°C. The mitochondrial fraction of the pellets was then resuspended in lysis buffer containing 150 mM NaCl, 50 mM Tris-HCl (pH 7.4), 1 mM EDTA, 1 mM Na3VO4, complete protease inhibitor mixture tablets (Roche Applied Science), and 1% Nonidet P-40. Supernatant fluids were further purified at 100,000 × g for 30 min (4°C) and used as the cytosolic fraction. Immunoprecipitation Assay-The immunoprecipitation assay was performed as described previously (13,16). 200 μg of mitochondrial fractions were incubated with 1 μg of anti-voltage-dependent anion channel (VDAC) antibody (Ab-5) (Merck) for 2 h at 4°C. 20 μl of protein A/G-agarose (Santa Cruz Biotechnology, Inc., Santa Cruz, CA) were added, and the specimens were incubated overnight at 4°C. Pellets were collected by centrifugation at 2,500 × g for 5 min at 4°C and washed four times with radioimmune precipitation assay buffer. The pellets were resuspended in 40 μl of sample buffer, boiled for 3-5 min, and centrifuged again, and the supernatants were analyzed first by PAGE and then by Western blotting using either anti-HSPB8 (Imgenex Corporation) or anti-VDAC (Ab-5) (Merck) antibodies. In some experiments, to examine the direct interaction of HSPB8 K141N and VDAC, the recombinant HSPB8 protein, as well as the HSPB8 K141N protein, was treated with the mitochondrial fractions from nontransgenic (NTG) mouse hearts at 25°C for 1 h and then washed four times with radioimmune precipitation assay buffer. Recombinant proteins were added to give a final concentration of 0.1 mg/ml in the medium. Preparation of Isolated Mitochondria and Measurement of Mitochondrial Respiratory Function-Preparation of cardiac mitochondria was performed using the method described previously (17). Heart tissue was homogenized in ice-cold buffer containing 180 mM KCl, 10 mM EDTA (pH 7.4), and 0.5% fatty acid-free BSA.
The homogenate was then centrifuged at 700 ϫ g for 10 min at 2°C, and the resulting supernatant fluid was centrifuged at 8,000 ϫ g for 10 min at 2°C. The crude mitochondria were again suspended in buffer and centrifuged at 8,000 ϫ g for 10 min at 4°C. The organelles were then resuspended in suspension buffer (20 mM Tris-HCl, pH 6.8, containing 320 mM sucrose and 0.25% BSA) and used to measure mitochondrial activity. The isolated mitochondria were used for the measurement of mitochondrial respiratory function. The mitochondrial state 3 and 4 respiration, respiratory control index, and oxidative phosphorylation rate were determined using the method described previously (17). Isolated mitochondria were incubated in a medium of pH 7.4 that contained 10 mM Tris-HCl, 250 mM sucrose, 10 mM K 2 HPO 4 , and 10 mM glutamate and were stirred at 25°C. The mitochondrial oxygen consumption rate was measured in the chamber using a Clark-type oxygen electrode (Central Kagaku, Tokyo, Japan). The quality of the mitochondrial preparation was evaluated by assessing the respiratory control index, which was determined in the presence of 240 nmol of ADP. In some experiments, to examine the direct effect of HSPB8 K141N on mitochondrial oxygen consumption ability, the recombinant HSPB8 protein, as well as the HSPB8 K141N protein, was added to the medium. Recombinant proteins were added to give a final concentration of 0.1 mg/ml in the medium. Miscellaneous Methods-Sample preparation for Western blotting, gel preparation, and electrophoretic conditions were carried out as described previously (12,14). Western blot analyses were performed using anti-GAPDH antibody (Chemicon International, Temecula, CA), anti-HSPB8 antibody (Imgenex Corporation), anti-HA antibody from MB Laboratories Co., Ltd. (Nagoya), anti-VDAC antibody (Ab-5) (Merck), and anticytochrome c antibody (BD Bioscience, CA). The band intensity in the immunoblot was semiquantified using Image J. The filter assay for the detection of aggregates was performed as described previously (12). Lysate from both the transfected cells and the transgenic mouse hearts was centrifuged at 12,000 ϫ g for 10 min. The resultant pellet was diluted into 0.2 ml of 2% SDS and boiled for 5 min. After boiling, the sample was filtered through a 0.2-m nitrocellulose membrane. The aggregate fraction on the membrane was detected with anti-HSPB8 antibody. Echocardiography and trichrome staining were performed as described previously (12). Transgenic Mice-Female mice with cardiac-specific overexpression of mutant HSPB8 containing the K141N mutation, driven by the modified ␣-myosin heavy chain promoter, have been described previously (12,18). The TG mice were identified by PCR analysis of genomic DNA isolated from tail tips. The responder HSPB8 and HSPB8 K141N mice were crossed with tetracycline-controlled tTA TG mice to generate tTA/ HSPB8 and tTA/HSPB8 K141N double TG (tTA/HSPB8 TG and tTA/HSPB8 K141N TG) mice (12,18). The responder HSPB8 and HSPB8 K141N mice used for all experiments had a C57BL/6Cr Slc genetic background (SLC, Shizuoka, Japan). The CryAB R120G and tTA TG mice were backcrossed with C57BL/6Cr Slc mice more than 10 times and maintained on a C57BL/6Cr Slc background as described previously (12). NTG littermates were always used as controls for comparison. The animals were housed in microisolator cages in a pathogen-free barrier facility. All experimentation was performed under approved institutional guidelines. 
Statistics-The data are expressed as the means Ϯ standard error. Statistical analysis was performed using the unpaired Student's t test and one-way analysis of variance followed by a post hoc comparison with Scheffe's multiple comparison, using Statview version 5.0 software (Concepts, Inc., Berkeley, CA). Ethics-This study was approved by the Animal Care Committee of Iwate Medical University (approval identification 21-056). All of the experimental procedures were performed in accordance with the Guidelines of the Iwate Medical University Ethics Committee for Animal Treatment and the Guidelines for Proper Conduct of Animal Experiments by the Science Council of Japan. HSPB8 K141N in Cardiomyocytes and N1E115 Cells-Although a missense mutation in HSPB8, a member of the small HSP family, causes HMN and CMT disease, which is characterized by the formation of inclusion bodies in cells, the role of the mutant HSPB8 protein in other tissues remains uncertain. To study this, we analyzed the distribution of the HSPB8 protein in mouse tissue (Fig. 1A). HSPB8 protein was detectable in most tissues, such as the cerebrum, spinal cord, heart, and aorta ( Fig. 1A), although the levels were not uniform among these tissues. The levels of HSPB8 in the heart and aorta were much higher than those in the cerebrum and spinal cord. These results imply that the HSPB8 missense mutation that causes HMN and CMT disease may affect the tissue function of heart and vascular tissues. To address the effect of the HSPB8 mutant protein in cardiomyocytes, we expressed HSPB8 and HSPB8 K141N using an adenoviral vector. In this experiment, we used adenoviral vector at a MOI of 1 to express HSPB8 as well as HSPB8 K141N (Fig. 1B). Under this experimental condition, the protein level of the expressed HSPB8 protein, as well as that of the HSPB8 K141N protein, was similar to the endogenous HSPB8 protein level in cardiomyocytes expressing LacZ, because viral expres-sion of HSPB8 resulted in the down-regulation of the endogenous HSPB8 protein (Fig. 1, B and C). Similar HSPB8 protein levels were observed among wild-type HSPB8, HSPB8 K141N, and LacZ cardiomyocytes (Fig. 1, B and C), and cardiomyocytes expressing the wild-type HSPB8 and LacZ showed no detectable aggregates. Despite this, perinuclear aggregates that were immunoreactive against an HSPB8 antibody were observed in cardiomyocytes expressing HSPB8 K141N (Fig. 1, D-F). These results suggest that, in a manner similar to that of the CryAB missense mutation (5), the HSPB8 missense mutation can induce aggregate formation in cardiomyocytes. A slight reduction in cellular viability was observed in cardiomyocytes expressing HSPB8 K141N (Fig. 1G), whereas the CryAB missense mutation led to a significant reduction in cellular viability, as described previously (5). Similar to the results for cardiomyocytes, mild cellular toxicity of HSPB8 K141N and severe cellular toxicity of CryAB R120G were observed in N1E115 mouse neuroblastoma cells (Fig. 1H). This finding may imply that mutant HSPB8 is less toxic than mutant CryAB in cultured cells such as cardiomyocytes and neuroblastoma cells. Amyloid Oligomer Formation of HSPB8 K141N-To compare the cellular toxicity and amyloid oligomer formation of the HSPB8 K141N and CryAB R120G proteins, we generated recombinant proteins ( Fig. 2A). Recombinant HSPB8 K141N protein showed immunoreactivity against an anti-amyloid oligomer antibody, and its reactivity was similar to that of the recombinant CryAB R120G protein. 
The immunoreactivities of recombinant wild-type HSPB8 protein and recombinant wild-type CryAB protein were weaker than those of the recombinant mutant HSPB8 and the mutant CryAB proteins (Fig. 2, B and C). When the recombinant HSPB8 K141N protein was added to the culture medium, mild cellular toxicity in 293T cells as well as in cardiomyocytes was observed, but this cytotoxicity was less than that of the recombinant CryAB R120G protein (Fig. 2, D and E). These results suggest that although both the HSPB8 K141N protein and the CryAB R120G protein can form an amyloid oligomer at a level detectable by the anti-oligomer antibody, the cytotoxicity of HSPB8 K141N is milder than that of the CryAB R120G protein. (Figure 2 legend, excerpt: B, dot blotting shows the presence of the amyloid oligomer in the recombinant HSPB8 K141N and CryAB R120G proteins, whereas a lower amyloid oligomer level is detected in wild-type HSPB8 (HSPB8 WT) and CryAB (CryAB WT); C, quantitative amyloid oligomer analysis, with values expressed as fold increases relative to 0.5 μg of CryAB WT, arbitrarily set to 1; D and E, cellular toxicity of the recombinant HSPB8 K141N protein in HEK293 cells (D) and cardiomyocytes (E) determined by the MTT method, expressed as fold increases relative to buffer-treated cells; *, p < 0.05; **, p < 0.01; ***, p < 0.001 versus CryAB WT; #, p < 0.05; ####, p < 0.001 versus HSPB8 WT; a, p < 0.05; aa, p < 0.01; aaa, p < 0.001 versus buffer; b, p < 0.05; bbb, p < 0.001 versus CryAB; c, p < 0.05 versus HSPB8.) tTA/HSPB8 K141N Double TG Mouse-Our in vitro study showed that the missense mutation of HSPB8 led to the formation of a toxic amyloid oligomer and that HSPB8 K141N showed mild cellular toxicity in vitro. To further study the effect of the HSPB8 missense mutation on cardiomyocytes in vivo, we generated cardiac-specific TG mice that overexpressed the HSPB8 K141N mutation or wild-type HSPB8 using an inducible cardiac-specific α-myosin heavy chain promoter as described previously (11,12). The level of HSPB8 proteins observed in the HSPB8 and HSPB8 K141N TG mice was 3-fold higher than that of NTG mice when they were cross-bred with tTA TG mice (12); HSPB8 K141N proteins were also observed (Fig. 3, A and B). No differences were observed in the HSPB8 protein level between tTA/HSPB8 double TG mice and tTA/HSPB8 K141N double TG mice (Fig. 3, A and B). Furthermore, although the protein levels were similar, the tTA/HSPB8 K141N double TG mice showed increased heart weight/body weight ratios compared with those of tTA/HSPB8 double TG mice and NTG mice at 6 months of age (Fig. 3, C and D). At 6 months, intracellular aggregates that were immunoreactive against anti-HSPB8, as well as against an anti-HA epitope that was introduced into the transgene products to distinguish them from endogenous protein, were detected in the hearts of the tTA/HSPB8 K141N double TG mice, whereas no aggregates were observed in the hearts of tTA/HSPB8 and NTG mice (Fig. 4, A and B). These aggregates contained the amyloid oligomer in the tTA/HSPB8 K141N double TG mice (Fig. 4, A and D). Thus, concomitant with cardiac hypertrophy, the mutant HSPB8 protein can result in the formation of aggresomes in the heart that are immunoreactive against the anti-oligomer antibody. The hearts of the HSPB8 K141N double TG mice exhibited mild apical cardiac fibrosis at 6 months of age (Fig. 5A).
Cardiac performance, such as fractional shortening and ejection fraction, was slightly reduced in tTA/HSPB8 K141N double TG mice compared with that in tTA/HSPB8 double TG and NTG mice at 6 months of age (Table 1 and Fig. 5, B and C). At 6 months, the oxidative phosphorylation ratio, respiratory control index, and oxygen consumption in state 3 were markedly decreased in the mitochondrial fraction from tTA/HSPB8 K141N TG mice compared with tTA/HSPB8 and NTG mice (Fig. 5, D-F). No distinguishable difference in terms of premature death was detected up to 2 years of age, and no distinguishable difference in TUNEL-positive apoptosis of cardiomyocytes was observed at 6 months of age among tTA/HSPB8 K141N double TG mice, tTA/HSPB8 double TG mice, or NTG mice (data not shown). These results suggest that cardiac disease, such as cardiac fibrosis and reduced cardiac function, occurs in response to the presence of the mutant HSPB8 protein and that this cardiac disease may be associated with mitochondrial dysfunction in the heart. Interaction of HSPB8 K141N Protein with Mitochondrial Protein-A missense mutation of CryAB, such as R120G, results in the accumulation of a mutant protein around nuclei, called an aggresome; this mutant protein can bind directly to VDAC protein (16). Because marked mitochondrial dysfunction of the heart in CryAB R120G TG mice was clearly observed, this protein interaction between the mutant CryAB and VDAC may play an important role in the cellular toxicity of the mutant CryAB (16). HSPB8, a member of the small HSP family along with CryAB, may be directly associated with mitochondria. To examine this hypothesis, we analyzed the distribution of HSPB8 in cardiomyocytes, as well as in double TG mouse hearts (Fig. 6, B and C). Typical mitochondrial markers, such as cytochrome c and VDAC, were enriched in the mitochondrial fraction from mouse hearts and cardiomyocytes, whereas the cytosolic marker GAPDH was enriched in the cytosolic fraction (Fig. 6A). Similar amounts of wild-type HSPB8 or HSPB8 K141N proteins were detected in the cytosolic fraction after expression of HSPB8 or HSPB8 K141N in cardiomyocytes infected at a MOI of 1 for each adenovirus, whereas a higher level of the HSPB8 K141N protein was observed in the mitochondrial fraction compared with that in cardiomyocytes expressing the wild-type HSPB8 (Fig. 6B). This difference in the HSPB8 K141N protein distribution pattern was also observed in the hearts of double TG mice (Fig. 6C). A similar distribution pattern, namely, a trend toward high amounts of the mutant protein in the mitochondrial fraction, was also detected for the CryAB R120G protein (Fig. 6D). (Figure 3 legend: Characterization of the tTA/HSPB8 K141N double TG mouse. A, typical images of HSPB8 Western blot analysis; an increase in the transgene product, which carries an HA epitope tag at the N terminus, was observed, and anti-GAPDH antibody was used as a loading control. B, quantitative analysis of HSPB8, with values expressed as fold increases relative to NTG mouse hearts, arbitrarily set to 1. C, representative images of hearts from 6-month-old TG mice. D, ratios of heart weight to body weight; a significant increase in heart weight was observed in the tTA/HSPB8 K141N double TG mouse. *, p < 0.05; ***, p < 0.001 versus NTG; ###, p < 0.001 versus tTA/HSPB8 double TG mice.) These results suggest that a missense
mutation of small HSPs, such as HSPB8 K141N and CryAB R120G proteins, can change their localization from cytoplasm to mitochondria, probably because of alteration of their affinity for partner proteins. In a previous study, we showed that CryAB R120G can bind directly to VDAC protein with high affinity compared with wild-type CryAB (16). This result may imply that the HSPB8 missense mutation may be able to bind to VDAC protein. To examine this hypothesis, we performed immunoprecipitation assays using an anti-VDAC antibody (Fig. 6E), which showed a detectable interaction with HSPB8, but almost no interaction with HSPB8 derived from hearts in which NTG or wild-type HSPB8 was overexpressed (Fig. 6E). This result suggests that HSPB8 K141N can interact with VDAC protein in mice. To confirm this result, recombinant HSPB8 and HSPB8 K141N proteins were added to the mitochondrial sample from NTG mice. Similar to the TG mouse study, a detectable interaction with the recombinant HSPB8 K141N but almost no interaction with the recombinant HSPB8 was observed (Fig. 6F). Mitochondrial oxidative phosphorylation was reduced by treatment with recombinant HSPB8 K141N protein compared with wildtype HSPB8-treated mitochondria (Fig. 6G). These results suggest that HSPB8 K141N protein can interact with mitochondria and that this interaction may be associated with a reduced mitochondrial oxygen consumption rate. DISCUSSION In the present study, we showed that cardiac-specific overexpression of HSPB8 K141N can cause cardiomyopathy, whereas no obvious phenotype was observed in the overexpres- sion of wild-type HSPB8. Similar results were observed in other TG mice in which disease is caused by missense mutations of small HSPs, such as CryAB R120G (5), CryAA R116C (6), HSPB1 S135F, and P182L (19). The phenotype of the cardiacspecific CryAB R120G TG mouse includes the accumulation of CryAB aggregates as well as severe cardiac disease at approximately 6 months of age (5). Studies have shown that expression of CryAA R116C in lens tissue results in posterior cortical cataracts and structural abnormalities (6) and that the neuronalspecific overexpression of HSPB1 S135F and P182L leads to CMT disease or distal HMN (19). In contrast to TG mice expressing the mutant HSPs, TG mice expressing wild-type small HSPs, such as CryAB, CryAA, and HSPB1, were indistinguishable from age-matched NTG control mice (5, 6, 19). Fur-thermore, phenotypes were seldom observed in CryAB and HSPB2 KO mice (20) or HSPB8 KO mice (9). These small HSP KO mice showed normal development and no obvious abnormalities in tissue function, but the stress response to pressureinduced overload to the heart was impaired (9,21). These data indicate that the HSPB8 K141N mutation can cause cardiomyopathy on its own, is dominant negative, and results in cardiac hypertrophy. The mutant protein may play an important role in neurodegenerative disease formation, such as HMN and CMT disease. There has been no published research regarding the cardiac phenotype induced by the missense HSPB8 mutation in humans because HSPB8 missense mutations were shown to cause HMN and CMT disease (8, 22, 23). The expression pat- tern of HSPB8 is ubiquitous, and the highest expression occurs in muscle tissue (10). This implies that mutant HSPB8 protein can be present in muscle tissues, such as the heart. The HSPB8 K141N protein showed mild cellular toxicity in both rat neonatal cardiomyocytes and cultured mouse neuroblastoma cells, N1E115 cells. 
Cardiac toxicity of the mutant HSPB8 was also detected in TG mice in our study. Because the cardiac HSPB8 expression level is higher than in most other human tissues (8,22,23) and because this expression pattern is similar to the results obtained in mice in the present study, these results imply that HSPB8 missense mutations, such as K141N and K141E, are present in cardiomyocytes as well as in peripheral neuronal cells. HSPB8 K141N protein overexpression using an α-myosin heavy chain promoter can cause aggresomal accumulation and amyloid formation in cardiomyocytes without any modification of noncardiomyocytes. Furthermore, no obvious phenotype was observed with wild-type HSPB8 overexpression. Thus, the observed phenotype is unlikely to be explained by the experimental approach itself. (Figure 6 legend, excerpt: G, direct effect of recombinant HSPB8 (his-HSPB8) as well as HSPB8 K141N (his-HSPB8 K141N) proteins on oxidative phosphorylation ability in mitochondria isolated from NTG mice (n = 6); *, p < 0.05; **, p < 0.01 versus mitochondria treated with his-HSPB8.) Because it is known that Arg-116 in CryAA and Arg-120 in CryAB correspond to Lys-141 in HSPB8 in the α-crystallin domain of small HSPs and because disease-causing missense mutations, such as R120G, R116C, and K141N, were found at these amino acids, similar underlying mechanisms of cellular toxicity may be present among these mutant small HSPs (7). To address this, we examined amyloid oligomer formation for the recombinant HSPB8 K141N and CryAB R120G proteins. Recombinant HSPB8 K141N protein showed immunoreactivity against an anti-amyloid oligomer antibody that detects the structural characteristics of amyloid protein (24), and its reactivity was similar to that of the recombinant CryAB R120G protein, whereas the immunoreactivities of recombinant wild-type HSPB8 protein and recombinant wild-type CryAB protein were weaker than those of the mutant recombinant proteins. Because it is hypothesized that amyloid oligomers can permeabilize cellular membranes and lipid bilayers, which may represent the primary toxic mechanism of amyloid pathogenesis (14,25), cellular toxicity induced by the amyloid oligomers is associated with mitochondrial function as well as with the induction of apoptotic cell death by cytochrome c release from mitochondria (12,16). Marked mitochondrial dysfunction was observed in the hearts of HSPB8 K141N TG mice in this study as well as in CryAB R120G TG mice (12,16). As we have shown, mutant HSPB8 and mutant CryAB proteins (16) can be associated with mitochondrial proteins, including VDAC, a regulatory protein of the mitochondrial transition pore. The recombinant HSPB8 K141N protein can also be associated with VDAC protein; this association may play a role in the reduction in mitochondrial oxidative phosphorylation. Although the physiological significance of the protein interaction between mutant small HSPs and VDAC protein remains unclear, and the molecular mechanisms of mitochondrial dysfunction induced by mutant small HSPs are currently unknown, the association of mutant small HSPs with mitochondria may play an important role in cellular toxicity as well as in the formation of diseases such as cardiomyopathy, cataracts, HMN, and CMT disease. In our previous study, we showed that nicorandil, a mitochondrial KATP channel opener, could partially inhibit disease progression in CryAB R120G TG mice without any reduction in mitochondrial CryAB translocation (14).
This suggests that the translocation of mutant HSP is not associated with disease progression in CryAB R120G TG mice hearts. Similar to mitochondrial CryAB R120G translocation, the physiological significance of mutant HSPB8 translocation in cardiac disease in HSPB8 K141N TG mice remains unclear. Recently, an interaction between STAT3 (a transcription factor) and HSPB8 (which is related to STAT3 translocation to the mitochondria) has been shown in HSPB8 knock-out mice, where both mitochondrial STAT3 translocation and respiration were significantly decreased (9). Thus, it is possible that interaction between HSPB8 and STAT3 or VDAC plays an important role in the regulation of transcription and in mitochondrial oxidative phosphorylation ability and that these protein interactions can be altered by missense mutation such as K141N. Further research is needed to clarify the cause-andeffect relationship in HSPB8 K141N cellular toxicity. Although the mutant HSPB8 showed immunoreactivity against the anti-oligomer antibody similar to that of the mutant CryAB, the cellular toxicity of the mutant HSPB8 was milder than that of the mutant CryAB. Similar to the degrees of cellular toxicity observed in vitro, the cardiac disease observed in HSPB8 K141N TG mice was more benign than that in CryAB R120G TG mice, although amyloid oligomer-positive aggresomes were detected in cardiomyocytes from the TG mouse hearts (5,12,14). These in vivo and in vitro results indicate that the immunoreactivity against the anti-oligomer antibody is somewhat dissociated from the cellular toxicity. Another study also suggested that immunoreactivity against an anti-oligomer antibody is observed in native wild-type HSPs, such as human HSP27, HSP40, HSP70, HSP90, yeast HSP104, and bovine Hsc70 (23). Thus, anti-oligomer antibody immunoreactivity can be present in native proteins, particularly HSPs. Missense mutations from arginine or lysine (a positive-charged amino acid), to glycine or cysteine can markedly alter protein structure, because it is known that a positive charge must be preserved at this position for the structural and functional integrity of small HSPs (26,27). Thus, higher immunoreactivity against an anti-oligomer antibody in mutant small HSPs may result from altered protein structure because of the loss of charge and changes in overall surface hydrophobicity of the protein (26). The reason for the higher cellular toxicity of the mutant CryAB compared with that of the mutant HSPB8 remains uncertain in this study. One possible explanation is that the cellular toxicity of the mutant CryAB R120G protein may be induced by multiple factors. Previous studies showed that CryAB can bind to many contractile proteins, such as desmin, actin (28), and titin (29). Thus, the CryAB R120G protein may retain the ability to interact with contractile proteins, and this interaction may be enhanced relative to the normal affinity of the protein, as is the case with FBX4, a member of the F-box family of proteins (30). In addition, CryAB negatively regulates apoptosis by inhibiting caspase-3 activation (31). CryAB R120G, which is defective in chaperone activity, binds tightly to nascent contractile proteins, preventing them from folding correctly and integrating into productive sarcomeres (5). The presence of contractile protein fragments within the aggregates suggests that mutant CryAB binding may directly disturb contractile protein function, rendering muscle tissue particularly sensitive to the action of CryAB R120G (16). 
Thus, the pathogenesis of mutant CryAB probably reflects a synergistic combination of these mechanisms. Further study will be needed to address these structure-function relationships in mutated small HSPs. A previous study showed that overexpression of wild-type HSPB8/H11 kinase resulted in cardiac hypertrophy in mice (23). In contrast, our previous study (12), as well as the present study, showed that no obvious phenotype is observed upon overexpression of wild-type HSPB8 in the heart. The reasons for the different findings between the previous study by another group and our results are uncertain. One possibility involves the gene constructs: the previous study used human HSPB8/H11 kinase with a C-terminal HA tag to generate TG mice, whereas we used mouse HSPB8 with an N-terminal HA tag. The genetic background of the mice is another difference between the two studies (FVb/n strain in the previous study and C57BL/6 background in our study). All of the results suggest that overexpression of mouse missense mutant HSPB8, such as K141N, at a level approximately 3-fold higher than that of NTG mice can result in mild cardiac hypertrophy, whereas no obvious phenotype is observed with the same level of overexpression of wild-type mouse HSPB8 in the C57BL/6 strain. The hearts of HSPB8 K141N double TG mice exhibited mild apical cardiac fibrosis at 6 months of age. Aggresomal and amyloid formation in cardiomyocytes can induce mechanical deficits in passive cytoskeletal stiffness in the heart and can increase cardiac wall stress (32). An increase in wall stress, which can cause relatively hypoxic conditions particularly at the apex and in the subendocardial region of the heart, may be associated with cardiac fibrosis. Further study on the observed increase in wall stress and cardiac fibrosis is required. HSPB8 expression induced by an adenoviral vector led to a reduction in endogenous HSPB8 in cardiomyocytes (Fig. 1B). This result was also observed in mouse hearts that overexpress HSPB8 using an α-myosin heavy chain promoter (12), whereas no alteration in endogenous gene expression was observed upon the overexpression of other small HSPs, such as CryAB, using the same system (13). Thus, HSPB8 may modify its own gene expression. Because HSPB8 can translocate to the nucleus and can modulate the function of transcription factors such as STAT3, HSPB8 may play a self-regulating role in its gene expression (9). Summary-Overexpression of HSPB8 K141N resulted in increased perinuclear HSPB8-positive aggregates containing amyloid oligomer and mild cellular toxicity, whereas no aggregates or cellular toxicity were observed in myocytes overexpressing wild-type HSPB8 in vitro and in vivo. Recombinant HSPB8 K141N protein showed reactivity against an anti-oligomer antibody, and the reactivity of the mutant HSPB8 protein was much higher than that of the wild-type HSPB8 protein. Thus, a missense mutation of HSPB8 such as K141N can affect cellular function in cardiomyocytes and may cause cardiomyopathy as well as HMN and CMT disease.
8,045.8
2013-02-06T00:00:00.000
[ "Biology", "Medicine" ]
NURR1 Downregulation Favors Osteoblastic Differentiation of MSCs Mesenchymal stem cells (MSCs) have been identified in human dental tissues. Dental pulp stem cells (DPSCs) were classified within MSC family, are multipotent, can be isolated from adult teeth, and have been shown to differentiate, under particular conditions, into various cell types including osteoblasts. In this work, we investigated how the differentiation process of DPSCs toward osteoblasts is controlled. Recent literature data attributed to the nuclear receptor related 1 (NURR1), a still unclarified role in osteoblast differentiation, while NURR1 is primarily involved in dopaminergic neuron differentiation and activity. Thus, in order to verify if NURR1 had a role in DPSC osteoblastic differentiation, we silenced it during all the processes and compared the expression of the main osteoblastic markers with control cultures. Our results showed that the inhibition of NURR1 significantly increased the expression of osteoblast markers collagen I and alkaline phosphatase. Further, in long time cultures, the mineral matrix deposition was strongly enhanced in NURR1-silenced cultures. These results suggest that NURR1 plays a key role in switching DPSC differentiation toward osteoblasts rather than neuronal or even other cell lines. In conclusion, DPSCs represent a source of osteoblast-like cells and downregulation of NURR1 strongly prompted their differentiation toward the osteoblastogenesis process. Introduction The regenerative medicine is increasing its interest in using adult stem cells for the regeneration of mineralized tissues. Specifically, wide variety of postnatal MSCs have been identified in the dental tissues in the past decade. In particular, DPSCs can be isolated from the dental pulp of adults, a tissue containing the progenitors of the dentinogenic lineage and thus physiologically involved in the reparative processes of dentin [1][2][3]. Although the regenerative process of the dentin/pulp complex is not well understood, it is known that the reparative dentin is deposed as a protective barrier for the pulp as a consequence of trauma or cavity [4,5]. DPSCs are normally quiescent, but, following injuries that cause odontoblast death, they can resume their biological activity. Thus, in response to stimuli located on pulp-dentin interface, DPSCs are recruited at the site of the lesion and differentiate into odontoblasts synthesizing reparative dentin and preserving tooth vitality. Previous works showed that DPSCs can be considered odontoblast/osteoblast precursors because they express osteogenic markers and are responsive to many growth factors for osteo/odontogenic differentiation [6][7][8]. In addition, dental pulp cells are capable of forming mineral matrix nodules [2,[9][10][11]. Actually, it has been demonstrated that DPSCs can differentiate toward multiple cell lineages; hence, when stimulated with the appropriate culture media, they showed the capacity to differentiate into chondrocytelike, adipocyte-like, and osteoblast-like cells [12][13][14][15][16][17]. Consistently, more studies showed that DPSCs, when properly stimulated, can be induced to differentiate into neuronallike and glial cells expressing the typical markers nestin and glial fibrillary acidic protein (GFAP) [18][19][20][21]. 
In addition, DPSCs showed to differentiate into osteoblast-like cells, express the main bone matrix protein collagen I (Col1), the typical osteoblast enzyme alkaline phosphatase (ALP), and form nodules of mineralized matrix [2,15,[22][23][24]. This suggests the presence of different niches of progenitors/stem cells in the pulp with a multipotency of differentiation that can be intercepted and altered by the appropriate stimuli. Morphological characteristics of DPSCs were compared to those of mesenchymal stem cells (MSCs) from bone marrow; the comparison showed many similarities [2,13]; it is also relevant that gene expression profiles of the two cell populations were very similar [25][26][27]. The finding of the differentiation potential of DPSCs led the scientists to consider them as an alternative source of postnatal stem cells. In particular, the ability to differentiate into osteoblast-like cells, which are able to deposit a mineralized matrix, has revolutionized the dental research and opened new perspectives for reconstructive surgery and calcified tissue bioengineering. The literature data on dental stem cells are so promising that American companies, with the approval by the Food and Drug Administration (FDA), provide a service of isolation and preservation of these cells where the onset of disease would make their use beneficial in therapy. Although the plasticity of DPSCs and their ability to generate many different cell lines are already known, what genes are involved in the multilineage differentiation ability of these cells and in their osteoblastic differentiation process remains unclear and needs to be deeply investigated, since osteoblastogenesis is influenced by many cytokines and genes [28,29]. We have reported in a previous work that DPSCs express the nuclear receptor NURR1 in basal and in osteogenic conditions [23], a surprising finding, considering that NURR1 is a member of the nuclear steroid/thyroid receptor superfamily, expressed primarily in the central nervous system, essential for the survival and development function of dopaminergic neurons of the ventral nuclei of the brain [30]. Indeed, the expression of NURR1 was already described in DPSCs and SHEDs, but a role for the receptor was mostly attributed during the differentiation toward a neuronal phenotype [19,[31][32][33]. Actually, a couple of works reported, in mice calvarial osteoblast and MC3T3-E1, that NURR1 increased the expression of osteoblastic markers [34,35]. Conversely, a more recent work described a cross talk between NURR1 and β-catenin where NURR1 inhibited β-catenin-mediated expression and βcatenin was capable of inhibiting the transcriptional activity of NURR1 [36]. So far, NURR1 is expressed in DPSCs, but its role in the osteogenic differentiation is still controversial and needs more investigations. Thus, having established that DPSCs are an excellent model for studying the osteoblast differentiation [2,15,[22][23][24], in this work, we knock down NURR1 in DPSCs, by using the gene silencing technology, and elucidated the effect and the role of Nurr1 in osteoblast differentiation. Osteogenic Trigger Inhibits Neuronal Markers Expression in DPSCs. To confirm that DPSCs, following the osteogenic differentiation treatment, commit to osteoblastic lineage and lose their multipotency, we analyzed the expression of the neuronal protein nestin and the astrocytes marker GFAP. 
The cells were cultured in presence of osteogenic media, and the total cell lysates were collected at different time points (T0, 4, 8, and 12 days) to be analyzed by Western blotting. Figure 1 shows that both nestin and GFAP are expressed during the first phases of osteogenic differentiation, but their expression became dramatically reduced after 8 days of culture. These results demonstrated that, during the first days (4-8) of osteogenic differentiation, DPSCs continue to maintain neural potentials, or perhaps not all the cells are already committed while, after 8 days of culture in osteogenic medium, the neuronal potential of DPSCs appeared completely suppressed. NURR1 Expression Was Knocked Down in DPSCs. Our previous work, showing that NURR1 was expressed in DPSCs in basal conditions and still present when the cells differentiated into osteoblast-like cells [23], prompted us to deeper investigate the role of NURR1 in DPSCs during the differentiation toward osteoblastic lineage. To this purpose, we used siRNA to knock down NURR1 expression in DPSCs from time zero (T0) during the whole differentiation process. The cells were seeded in osteogenic medium and the silencing sequences NURR1 (SIL) or scramble (CTR) were added every 48 hrs in order to keep NURR1 downregulated. All cell lysates were collected and subjected to qPCR showing a dramatic reduction of Nurr1 mRNA in silenced samples relative to CTR at the all analyzed time points (2, 4, 6, and 8 days) (Figure 2(a)). Detection of NURR1 protein levels was performed by Western blotting, confirming the decrease of the protein in NURR1 silenced cells (Figure 2(b)). NURR1 Downregulation Favors the Osteogenic Differentiation of DPSCs. Once verified that NURR1expression was silenced during the osteoblastic differentiation of DPSCs, we analyzed how NURR1 knockdown could influence the osteogenic differentiation of DPSCs. Osteoblastic markers such as ALP, Col1, Runx-2, osteoprotegerin (OPG), osteopontin (OPN), and osteocalcin (OCN) were studied by qPCR: a schematic panel of the results is shown in Table 1. However, the osteogenic markers that were significantly influenced by NURR1 downregulation have been described in details below. The expression of the typical osteoblast early markers Col1 and ALP was determined by qPCR ( Figure 3). Col1 mRNA level increased in the CTR cells, along the analyzed differentiation steps (Figure 3(a)), as well as ALP ( Figure 3(b)), confirming that DPSCs cultivated in osteogenic medium acquired the typical osteoblastic features. Intriguingly, the expression of Col1 significantly increased in NURR1 silenced cells compared to CTR cells with a significant trend at 6 and 8 days, as did ALP at 8 days, suggesting that NURR1 downregulation favors the osteogenic differentiation of DPSCs. The expression trend of Col1 was further confirmed, in NURR1 silenced and CTR cells, by Western blot analysis. As shown in Figure 3(c), Col1 protein level increased in NURR1 silenced cells if compared with CTR cells at 2-6 days, thus confirming the mRNA data. The molecular result of ALP trend was further supported by the histochemical evaluation of ALP expression. The histochemical assay was performed on DPSCs CTR and siNURR1 after 8 days of osteogenic differentiation (Figure 4(a)). As revealed by the purple staining, ALP expression was significantly more abundant in siNURR1 cells compared to CTR cells (~150%) corroborating the idea that NURR1 expression must be downregulated to prompt the cells to osteogenic lineage. 
Downregulation of NURR1 in DPSCs Favors the Mineral Matrix Deposition Ability. To further investigate the role of NURR1 in osteoblast differentiation of DPSCs, we cultured CTR and NURR1 silenced cells in mineralizing conditions. The silencing sequences (siNURR1) or scramble (CTR) were added every 48 hrs in order to keep NURR1 downregulated. A histochemical assay was used to analyze how NURR1 knockdown could influence the ability of DPSCs to mineralize. As showed in Figure 4(b), the capacity of DPSCs to mineralize was highly enhanced in NURR1 silenced cells compared to CTR cells (200%). These results are in agreement with the increased expression of osteoblast markers Col1 and ALP and confirmed the finding that NURR1 is expressed in undifferentiated DPSCs, but down levels of the receptor prompt the differentiation of the cells toward the osteoblastic lineage and mineral matrix deposition. Discussion So far, NURR1 has been considered primarily involved in dopaminergic neurons differentiation and activity. Interestingly, a key role for the receptor was attributed during the differentiation of DPSCs toward a neuronal phenotype [19,[31][32][33]. Indeed, NURR1 is crucial for dopaminergic neuron function [37] and its malfunction has been correlated with neurological and inflammatory disease [38,39]. By contrast, literature data about NURR1 role in osteoblasts are controversial: studies in mice highlighted an effect in increasing the osteoblastic phenotype of primary culture and osteoblastic cell lines [34,35], while a more recent work indicated that NURR1 downregulated the main osteoblastic differentiation pathway, involving β-catenin, in a human osteoblastic cell line [36]. In addition, we found that MSCs such as DPSCs express NURR1 in basal and osteogenic conditions [23]. Thus, NURR1 is expressed in DPSCs, with a prominent role in neuronal differentiation, but its role in the osteogenic differentiation needs more investigations. osteogenic process. Both markers were expressed in DPSCs during the first phases of osteogenic differentiation, perhaps the cells still retaining a neuronal potency, but dramatically decreased after 8 days of culture, indicating the expected result that osteoblast differentiation triggers decreased DPSC neuronal potential. Tissue regeneration, based on adult stem cells approach, is still facing with strategies directed to control and increase their differentiation capacity; thus, the discovery of target molecules to modulate, in order to address the desired commitment, is still an open challenge [40]. Mainly, MSC multipotency has the problem in regeneration therapy to drive cell differentiation to the correct lineage, reconstructing the expected mature tissue. The role of NURR1 in osteoblast differentiation is not yet clearly established and is intriguing, since it is expressed in MSCs during osteoblastogenesis [23,34,36], but it is also involved in neuronal differentiation [19,41]. To unambiguously establish the role of this receptor in osteogenic differentiation of MSCs, we inhibited NURR1 during all the processes submitting DPSCs to a repeated multistep silencing treatment. Primarily, we checked the successful of NURR1 silencing treatment at each time of the experiment. Hence, the main osteogenic markers were studied. Col1 expression indicated that DPSCs acquired the capacity to secrete the main bone matrix protein and we found that NURR1 silencing increased both mRNA and protein expression. 
ALP mRNA levels dramatically increased in silenced cells and the histochemical assay confirmed the different enzyme quantities, indicating that NURR1 downregulation had a strong effect on the expression of the molecule crucial for osteoblast during the matrix deposition. The final crucial step in the bone regenerative process is the inorganic matrix formation [42]. Thus, mature osteoblasts, after the secretion of organic matrix components, begin the mineralization phase. MSCs from dental tissues have been demonstrated to correctly undergo to mineralization process [43]: some substances such as vitamin D could increase the mineral matrix deposition [44]; we found that inhibiting NURR1 enhanced DPSC mineralization. In summary, NURR1 is expressed in DPSCs, but to pursuit the cells toward a greater matrix deposition, proper of mature osteoblast, the receptor can be downregulated. MSC differentiation fate can be artificially modulated, in vitro, by the appropriate culture conditions and compounds. Apparently, the epigenetic science indicates that different stimuli can interfere with gene expression; in vivo, this issue regards also cell differentiation. In conclusion, our results showed the expression of nestin and GFAP in DPSCs confirming their neural potential. In addition, we demonstrated that such neural and glial markers are still present during the first steps of osteogenic differentiation, suggesting that DPSCs still maintain quite a multipotency or perhaps not all the cells in the culture are yet committed to osteogenic lineage. After 8 days, the expression of these markers dramatically decreased, suggesting that the cells lose their neural potential. In the same way, we found that NURR1 is expressed in DPSCs, but keeping down its expression during the osteogenic differentiation, the expression of typical osteoblastic markers is increased, culminating in higher production of mineralized matrix. We demonstrated that one of the mechanisms regulating MSC plasticity, influencing their phenotype, is NURR1 expression; in particular, its inhibition promotes osteoblastogenesis and enhances mineral matrix deposition. Discovering an appropriate in vivo method for inhibiting NURR1 during MSC osteogenic differentiation could improve an adult stem cell based tissue engineering, enhancing bone tissue regeneration. Cell Cultures. Human pulp tissues were collected from the third molars of twenty healthy young adults aged between eighteen and twenty-six years. The study was approved by the Institutional Review Board of the Department of Dental Science and Surgery-Unit of Periodontology, University of Bari; the patients gave written informed consent. Once the teeth were extracted, the pulp tissues were dissected, enzymatically digested, and filtered to obtain single-cell suspensions. DPSCs harvested were seeded and expanded as previously described [2,23,45,46]. For differentiation toward osteogenic lineage, cell culture medium was supplemented with 10 −8 M dexamethasone and 50 μg/ml ascorbic acid (Sigma Aldrich, Milan, Italy). For induction of matrix mineralization, we supplemented the cell culture medium with 10 −8 M dexamethasone, 50 μg/ml ascorbic acid, and 10 mM β-glycerophosphate. 4.2. Short-Interfering RNA Knockdown. DPSCs were transfected with NURR1-specific siRNA or scrambled sequences as control (50 nM) (Life Technologies) using RNAi Max Lipofectamine (Life Technologies). 
Both specific and control sequences were added on each medium change every 2 days, until the end of the culture, in order to keep the protein downregulated, reaching an optimal knockdown of NURR1 mRNA and protein (Figures 2(a) and 2(b)). 4.3. Real-Time RT-PCR. Total RNA was isolated using spin columns (RNasy, Qiagen, Hilden, Germany) according to the manufacturer's instructions and reverse transcribed (2 μg) using the Superscript First-Strand Synthesis System kit (Invitrogen Life Technologies, Carlsbad, CA, USA); the resulting cDNA (20 ng) was subjected to quantitative PCR Figure 3: Effect of NURR1 downregulation on osteoblast markers. (a)-(b) qPCR performed on si-NURR1 or CTR cells showed that NURR1 downregulation significantly increased the expression of the two osteoblast markers ALP (8 days) (a) and Col1 (6-8 days) (b). Expression was normalized to GAPDH. * P < 0 01 compared to CTR. (c) Immunoblotting confirmed that the expression of Col1 protein increased in NURR1 silenced cells relative to CTR cells ( * P < 0 01). Each graph represents means ± SE of 3 independent donors. Statistics: unpaired Student's t-test. Figure 4: Effect of NURR1 downregulation on ALP and mineralization. (a) ALP histochemical assay (purple staining) performed on DPSCs transfected with NURR1-specific siRNA or scrambled sequences and maintained in osteogenic conditions for 7 days. The graph represents the quantification of positive staining as percentage compared to CTR ( * P < 0 01) and is representative for 3 independent donors. Data are presented as mean ± SEM. Student's t-test was used for single comparisons. (b) Mineral matrix deposition assayed by ARS (red staining) in siNURR1 and CTR cells after 21 days in osteogenic conditions. The graph shows the OD quantification of extracted dye from stained cell layers as percentage compared to CTR ( * P < 0 001) and is representative for 3 independent donors. Data are presented as mean ± SEM. Student's t-test was used for single comparisons. as described. Real-time PCR analysis of mRNA was performed using a BioRad CFX96 Real Time System using the SYBR green PCR method according to the manufacturer's instruction (BioRad iScript Reverse Transcription Supermix cat. 170-8841). The mean cycle threshold value (Ct) from triplicate samples was used to calculate gene expression, and PCR products were normalized to GAPDH levels for each reaction. 4.4. Immunoblotting. Total cell lysates were obtained as previously described [44,45]. Total protein concentration was measured using the Bio-Rad Protein Assay kit, and cell lysates were separated by SDS-PAGE before transfer onto nitrocellulose membranes (Invitrogen, Carlsbad, CA). After immunoblotting with the appropriate antibodies, immune complexes were visualized by incubation with IRDye-labeled secondary antibodies (680/800CW) (LI-COR Biosciences, NE). For immunoblotting, the Odyssey infrared imaging system was used (LI-COR Corp., Lincoln, NE). Alkaline Phosphatase (ALP). The levels of the biochemical marker for the osteoblast activity, ALP, was tested in DPSC cultures differentiated with osteogenic factors, using the Leukocyte Alkaline Phosphatase Kit (Sigma Aldrich). Cells were fixed, gently washed with deionized water, and stained with ALP solution according to the manufacturer's instructions for 15 ′ . After incubation, the cells were rinsed with water, air-dried, and then analyzed under the microscope. ALP-positive cells show a purple color. 
ALP quantification was performed by ImageJ, analyzing the number of colored pixels corresponding to the positive stained cells. 4.6. Alizarin Red Staining (ARS). The capacity of differentiated DPSCs to produce calcium-rich deposits was analyzed by using alizarin red staining. The cells were gently rinsed with PBS, fixed with 10% formalin at room temperature for 10 minutes, and then rinsed again with deionized water. The staining was performed by adding 1% of ARS solution at room temperature for 10 minutes. After discarding the ARS solution, the wells were rinsed twice with deionized water and air-dried. Calcium-rich deposits appeared red stained. As previously described, the dye was extracted from the stained cell layer and assayed for quantification at 405 nm [46,47]. Briefly, 10% acetic acid was added for 30 min at room temperature with shaking, the solution incubated 10 min at 85°C and then kept on wet ice for 5 min. Before reading the optical density at 405 nm, 10% ammonium hydroxide was added to neutralize the acid. The results were evaluated for statistical analysis.
4,449.2
2017-07-09T00:00:00.000
[ "Biology", "Medicine" ]
Electronic structure and thermal conductance of the MASnI3/Bi2Te3 interface: a first-principles study To develop high-performance thermoelectric devices that can be created using printing technology, the interface of a composite material composed of MASnI3 and Bi2Te3, which individually show excellent thermoelectric performance, was studied based on first-principles calculations. The structural stability, electronic state, and interfacial thermal conductance of the interface between Bi2Te3 and MASnI3 were evaluated. Among the interface structure models, we found stable interface structures and revealed their specific electronic states. Around the Fermi energy, the interface structures with TeII and Bi terminations exhibited interface levels attributed to the overlapping electron densities for Bi2Te3 and MASnI3 at the interface. Calculation of the interfacial thermal conductance using the diffuse mismatch model suggested that construction of the interface between Bi2Te3 and MASnI3 could reduce the thermal conductivity. The obtained value was similar to the experimental value for the inorganic/organic interface. www.nature.com/scientificreports/ films on nylon, and the resulting material exhibited a relatively high power factor of ~ 389.7 µW/(m·K 2 ) at 418 K 27 . Kumar et al. fabricated a PEDOT:PSS/Te composite material, which reduced the thermal conductivity owing to the enhanced phonon-phonon scattering in the polymer matrix 28 . Many researchers have also studied the combination of Bi 2 Te 3 and PEDOT:PSS 8,[29][30][31][32] . Du et al. fabricated Bi 2 Te 3 based alloy nanosheet/PEDOT:PSS composite films, which exhibited high electrical conductivity (1295.21 S/cm) relative to BI 2 Te 3 -based alloy bulk materials (850-1250 S/cm), and a power factor of ~ 32.26 µW/(m·K 2 ) was obtained 30 . In a Te-Bi 2 Te 3 /PEDOT:PSS hybrid film synthesized through a solution-phase reaction at low temperature, a power factor of 60.05 µW/ (m·K 2 ) with a Seebeck coefficient of 93.63 µV/K and an electrical conductivity of 69.99 S/cm were reported by Bae et al 31 . Based on these results, it can be concluded that the electronic properties of the interface between the organic and inorganic materials play a critical role in improving the ZT of organic-inorganic hybrid materials. Here, we focus on halide perovskites instead of PEDOT:PSS to fabricate a printable thermoelectric material. The thermoelectric properties of inorganic halide perovskite (CsSnI 3 ) have previously demonstrated relatively high values as a printable thermoelectric material (ZT > 0.1 at room temperature) 33,34 . Organic-inorganic hybrid perovskites, ABX 3 (A: methylammonium cation (CH 3 NH 3 + ), B: lead or tin, X: iodide) have been investigated as candidate thermoelectric materials and are well known in the field of thin-film solar cells 35,36 . Regarding the thermoelectric properties of organic-inorganic perovskites, Pisoni et al. reported that CH 3 NH 3 PbI 3 exhibited an ultra-low thermal conductivity of 0.3-0.5 W/(m·K) at room temperature due to the slowly rotating CH 3 NH 3 + cations within the crystal structure 37 . Theoretical studies also predicted that CH 3 NH 3 PbI 3 would have a low thermal conductivity of ~ 1 W/(m·K) compared with other perovskites such as CsPbI 3 , CH 3 NH 3 Br 3 , and CH 3 NH 3 PbCl 3 [38][39][40] . On the other hand, CH 3 NH 3 SnI 3 (MASnI 3 ) is expected to exhibit low thermal conductivity compared with CH 3 NH 3 PbI 3 , with improved thermal properties obtained through chemical doping 41 . 
The advantage of perovskite compounds such as MASnI 3 over the organic materials PEDOT: PSS is that they have a variety of constituent elements, which enable the system elemental substitution. It is possible to change the energy level near the Fermi level, and it is expected that the electric conductivity and Seebeck coefficient will be improved. Such electronic state control can be performed more easily with perovskite than with PEDOT:PSS. In this study, we aimed to understand the interface structure of hybrid materials composed of Bi 2 Te 3 and organic-inorganic perovskite (MASnI 3 ) to improve the thermoelectric conversion properties. We previously reported the structural stability and electronic properties of different Bi 2 Te 3 (001) termination surfaces based on first-principles calculations 42 . Based on the results, we prepared three structures with different Bi 2 Te 3 termination structures and explored statically stable structures through structural optimization. Additionally, we calculated the electronic states and distribution of the charge density near the Fermi energy. The calculation of the diffuse mismatch model (DMM) 43,44 obtained from the results of phonon dispersion in Bi 2 Te 3 and MASnI 3 confirmed a decrease in the interfacial thermal conductance at the interface. Computational methods Density functional theory calculations for Bi 2 Te 3 /MASnI 3 interfaces. To create the interface structure, the crystal structure of Bi 2 Te 3 was transformed from a rhombohedral lattice to an orthorhombic lattice, and the lattice parameter of MASnI 3 was reduced to fit the lattice parameter of Bi 2 Te 3 . The interface models consisted of orthorhombic Bi 2 Te 3 (001) and tetragonal MASnI 3 (001), and a vacuum layer of ~ 15 Å was inserted. For simplicity, the termination structure of MASnI 3 was fixed as SnI 2 at the interface. For the structure of Bi 2 Te 3 in contact with MASnI 3 , three termination structures were considered: Te I , Te II , and Bi terminations, which are relatively stable surface structures that were described in our previous study 42 (Fig. 1c-e). The Vienna ab-initio simulation package (VASP) 45,46 with the projector-augmented wave method 47,48 was used for the first-principles calculations. For the exchange-correlation function, the generalized gradient approximation and Perdew-Burk-Ernzerhof function were used 49 . The cutoff energy was set at 520 eV, and structural optimization was performed using the Gaussian smearing method with a sigma value of 0.1 eV. The K-points were set at 5 × 6 × 1, and the convergence value for the structural optimization was set to 10 −3 eV. The Blöchl-corrected tetrahedron method was used for accurate calculation, and its convergence value was set at 10 −4 eV. To perform more accurate band structure, density of states (DOS), and charge distribution, we considered the spin-orbit coupling (SOC). Calculation of thermal conductance using DMM. For the thermal conductance calculation, phonon calculation of the interface structure between Bi 2 Te 3 and MASnI 3 is the most direct calculation method. However, for an interface structure, the number of atomic displacement patterns are required to obtain the highly accurate atomic force, and it is impossible to calculate by the first-principles calculation. Therefore, in this paper, we used DMM 44 , which is often used as a simple method for evaluating interfacial thermal conductance. 
The interfacial thermal conductance (thermal boundary conductance) obtained by the DMM is defined as the ratio of the heat current density to the temperature differential. To estimate the thermal boundary conductance for hybrid materials A/B, Reddy et al. defined the thermal boundary conductance, G, as follows: where α A→B (k, i) is the transmission probability of A to B, ω(k, i) is the phonon frequency corresponding to wave vector k and phonon mode I, and |V (k, i).n| is the group velocity along the unit vector n to the interface of A to B. Calculations of the transmission probability of A to B and the phonon frequency and group velocity of A and B obtained from phonon dispersion are required. Here, the transmission probability is calculated from the group velocities of A and B as follows: www.nature.com/scientificreports/ where K A and K B are the discretized cells of the Brillouin zones of A and B, respectively, and δ ω(k,i),ω ′ is the Kronecker delta function. Therefore, to evaluate the thermal boundary conductance with DMM, only the phonon dispersions of A and B are required. The calculated thermal conductance will be severely underestimated (by a factor of 1/2) when the transmission probability between similar materials is calculated using the DMM. Therefore, the maximum transmission model (MTM) was employed to evaluate the extreme upper limit of the thermal conductance if needed 50 . The phonon dispersions of Bi 2 Te 3 and MASnI 3 were evaluated using first-principles phonon calculations, and the group velocity was calculated from the results. To calculate the phonon dispersion, we used the finite displacement method with a displacement distance of 0.01 Å. The supercell sizes of Bi 2 Te 3 and MASnI 3 were 2 × 2 × 2 for the rhombohedral cells and 1 × 1 × 1 for the orthorhombic cells, respectively (Fig. 1a,b). We note that the longer lattice parameter of the orthorhombic cell of MASnI 3 is along the b-axis, and the a-and b-axes are rotated 45° relative to the cubic perovskite phase. To estimate the force due to the introduction of displacements, we used the VASP code with the following parameters: a plane wave energy cutoff of 400 eV, a convergence value for the electronic self-consistency loop of 10 −8 eV, Γ-point centered k-mesh limited to 0.1 Å −1 , and the Gaussian smearing method with a smearing width of 0.05 eV. In the phonon calculation, SOC does not significantly affect the phonon dispersion relation, therefore, SOC is not considered. We used the phonopy code 51 to create the displacement using the finite displacement method, and the ALAMODE 52 code for the phonon properties calculation. To obtain the phonon density of states (DOS) and group velocity for both structures, the reciprocal space was sampled using 10 × 10 × 10 meshes. Results and discussion Optimized interface structure. We constructed three interface structures with different Bi 2 Te 3 termination structures: Bi 2 Te 3 (Te I )/MASnI 3 , Bi 2 Te 3 (Te II )/MASnI 3 , and Bi 2 Te 3 (Bi)/MASnI 3 (Fig. 1). The crystal plane in contact with each structure in the interface was determined from the lowest lattice deformation ratio of various combinations of crystal planes; the selected structure of MASnI 3 was tetragonal, which was stable at room temperature. In the creation of the interface models, the lattice distance of MASnI 3 was reduced to fit that of Bi 2 Te 3 . Table 1 lists the lattice parameters of various interface models after structural optimization. 
The termination structure of Bi 2 Te 3 affected the lattice constant of the interface model; the Te II termination exhibited the lowest (Table 1). This result also led to a decrease in the lattice deformation ratio in the Bi 2 Te 3 (Te II )/MASnI 3 structure, as indicated in Table 2. The lattice deformation ratio was calculated using the following equation: deformation (%) = (d 2 /d 1 -1) × 100, where d 2 and d 1 represent the lattice distance of the transformed or optimized structure and the lattice distance of the bulk structure, respectively. On the other hand, the Bi 2 Te 3 (Bi)/MASnI 3 structure exhibited a high lattice deformation ratio among the three interface models; the lattice of Bi 2 Te 3 in the interface structure was particularly expanded. A low lattice deformation ratio is expected in the case of easy formation and relatively high stability of the interface structure experimentally. After structural optimization, the atoms in the interface moved strongly with an incomplete structure for Bi 2 Te 3 with the Te II and Bi termination structures, and this phenomenon was prominent for the Bi termination. This result suggests that Bi and Sn atoms can move easily into each structure, and the Te II and Bi termination structures form an interaction between Bi 2 Te 3 and MASnI 3 compared with the Te I termination. The relationship between the reconstruction of atoms in the structural optimization, lattice parameters, and lattice deformation ratio was not observed. To evaluate the interface stability between Bi 2 Te 3 and MASnI 3 , we calculated the binding energy using the following equation: where E total , E p , and E b denote the energies of Bi 2 Te 3 /MASnI 3 , the MASnI 3 (001) surface, and the Bi 2 Te 3 (001) surface, respectively (Fig. 2). E p , and E b means reference energies. For these (001) surface structures, we used the lattice constant of the ground state structure. A positive binding energy value indicates low stability of the interface structure, which makes formation of the interface difficult. Hence, Bi 2 Te 3 (Te I )/MASnI 3 is the most unstable interface structure; in contrast, Bi 2 Te 3 (Bi)/MASnI 3 is the most stable interface structure, with a binding energy of − 1.7 eV. Similar to the Bi termination model, Bi 2 Te 3 (Te II )/ Fig. 3 (corresponding band structures are also shown in Fig. S1). The valence band of each interface structure consists of Sn s-, I p-, Bi s-, Bi p-, and Te p-orbitals, and the conduction band consists of Sn p-, I p-, Bi p-, and Te p-orbitals. In the interface structures, the shapes of the partial DOS of Bi 2 Te 3 in each termination structure were similar to that of the bulk structure. However, the energy levels of the DOS changed with the termination structure in Bi 2 Te 3 , which is attributed to the difference in the ratio of Bi and Te atoms. The partial DOS of MASnI 3 in each interface structure exhibited different electronic states from the bulk structure over a range of − 0.5 eV to 0 eV; these states are attributed to the I p-orbital in MASnI 3 in contact with the vacuum layer. In the DOS around the Fermi energy in the Te II and Bi termination structures (Fig. 3b,c), the additional electronic state appeared at similar energy levels for both Bi 2 Te 3 and MASnI 3, indicating that the additional electronic state includes the contributions of both Bi 2 Te 3 and MASnI 3 . To investigate the additional electronic state, the decomposed DOS for each layer near the interface is shown in Fig. 4. 
The atoms included in the decomposed layer are shown in Fig. 5. The DOS for Bi, Te, Sn, and I consist of the s-and p-orbitals. The DOS for the middle layer of MASnI 3 (MASnI 3 -3L) exhibited a similar shape despite the different interface structures. However, the shapes of the DOS for Sn and Bi changed significantly near the interface, and they exhibited a different electronic state with the variation in Bi 2 Te 3 termination. In particular, on the Bi 2 Te 3 (Bi)/MASnI 3 interface, the conduction band of MASnI 3 -1L moved to near the Fermi energy. This result is attributed to the large change in the atomic positions of Sn and I at the interface. On the other hand, the shape of the DOS for Bi 2 Te 3 depended on its termination structure; in particular, it changed significantly in the layer in contact with the interface. This phenomenon originates from the different ratios of Bi and Te atoms in the incomplete Bi 2 Te 3 structure. The Bi 2 Te 3 (Bi)/MASnI 3 interface also had the potential to be affected by the movement of Bi atoms. Focusing on the first layers from the interface of Bi 2 Te 3 and MASnI 3 , the additional electronic state is observed at the same energy level in the layer near the interface between Bi 2 Te 3 and MASnI 3 . Figure 4(b) and (c) shows additional interface levels, denoted by arrows, with an overlapping electronic density appeared in both structures around the Fermi energy in the Bi 2 Te 3 (Te II )/MASnI 3 and Bi 2 Te 3 (Bi)/MASnI 3 structures. These results also suggest that the incomplete structure of Bi 2 Te 3 , such as the Te II and Bi terminations, plays an important role in the formation of interface states. The Te I termination did not produce an overlap in the DOS between Bi 2 Te 3 and MASnI 3 at the interface. The charge densities of each interface structure around the Fermi energy are shown in Fig. 6. Figure 6(a) shows the charge distribution in the interface structure with the Te I termination, which is localized at the MASnI 3 www.nature.com/scientificreports/ side, and is not observed at the interface between Bi 2 Te 3 and MASnI 3 . This suggests a decreasing affinity of Bi 2 Te 3 and MASnI 3 . On the other hand, the interface structure with the Te II termination possesses a localized charge distribution at the near interface and an overlapping charge density between Sn and Te atoms in the energy range of 0.2 to 0.5 eV (Fig. 6c). Phonon properties of Bi 2 Te 3 and MASnI 3 . Next, we estimated the interfacial thermal conductance of the Bi 2 Te 3 /MASnI 3 interface. Figure 7 shows the phonon dispersions and atomic projected phonon DOS of (a) Bi 2 Te 3 and (b) MASnI 3 . Bi 2 Te 3 exhibited low energy phonon modes below 150 cm −1 , and MASnI 3 had low (f < 120 cm −1 ) and high (f > 120 cm −1 ) energy phonon modes. Because of the difference in atomic mass, the vibrations of Sn and I appeared at low energies, and the vibrations of C, N, and H appeared at high energies. Therefore, the phonon dispersion of MASnI 3 showed a low-energy mode in the same range as Bi 2 Te 3 until approximately 150 cm −1 . Based on the phonon dispersion results for Bi 2 Te 3 and MASnI 3 , these structures are dynamically stable at T = 0 owing to the lack of observation of the imaginary mode. Our calculated phonon dispersions for both structures are similar to those reported in previous studies for Bi 2 Te 3 54 and MAPbI 3 (not MASnI 3 ) 55,56 . The group velocities of phonons are necessary for the evaluation of the interfacial thermal conductance using DMM, as shown in Eqs. 
(1) and (2). Figure 8 shows the absolute values of the calculated group velocities in the direction of the c-axis in Bi 2 Te 3 and a-, b-, and c-axes in MASnI 3 . The group velocity (speed of sound) was estimated from the three low-energy phonon modes within the phonon dispersion; hence, it corresponds to a gradient of the phonon dispersion. In calculated results, Bi 2 Te 3 exhibited a high group velocity at under 10 cm −1 , whereas MASnI 3 had a high group velocity above 10 cm −1 . Moreover, we found that the distribution of the group velocity with respect to the frequency differed between Bi 2 Te 3 and MASnI 3 . For MASnI 3 , the group velocity was not dependent on the direction of the crystal axis. It has been experimentally reported that Bi 2 Te 3 exhibits a group velocity of 1750 m/s based on nuclear resonant inelastic scattering 57 , whereas MAPbI 3 has exhibited group velocities of acoustic modes of 2400 or 1200 m/s based on neutron scattering 58 . Although there is a difference in the structures between our calculations (MASnI 3 ) and the previous experiments (MAPbI 3 Figure 9 shows the interfacial thermal conductance of the Bi 2 Te 3 /MASnI 3 interface; the combination of all axes of MASnI 3 and the c-axis of Bi 2 Te 3 was evaluated. The obtained values for the interfacial thermal conductance with different Bi 2 Te 3 /MASnI 3 interfaces were 1.5-2.0 MW/m 2 K. This result indicates that the interfacial thermal conductance was not affected by the orientation of MASnI 3 because the differences between different directions were small, as shown in the phonon dispersion curve (Fig. 7b). The calculated interfacial thermal conductance of Bi 2 Te 3 /MASnI 3 was lower than the calculated value of the inorganic/inorganic interface 44 . The reason for this is explained as follows: calculated phonons in Bi 2 Te 3 and MASnI 3 are distributed in the low energy region. This is due to the fact that they have relatively heavy elements and complicated structures. When the phonon dispersion is distributed in the low energy region, the group velocity becomes small. The interfacial thermal conductance calculated by DMM depends on the group velocity and phonon frequency, as shown in Eq. (1), therefore Bi 2 Te 3 /MASnI 3 interface shows a relatively low interfacial thermal conductance. Moreover, it was as low as the experimental values for inorganic/organic interfaces such as a graphene-Bi 2 Te 3 heterostructure (~ 3.46 MW/(m 2 ·K)) 59 and PEDOT:PSS-Bi 2 Te 3 heterostructure (~ 10 MW/(m 2 ·K)) 60 Therefore, these results suggest that the Bi 2 Te 3 /MASnI 3 interface has a low interfacial thermal conductance, and we expect that the application of this interface to thermoelectric materials can reduce the thermal conductivity. An extremely low thermal conductance is expected even for a stable structure at the Bi 2 Te 3 /MASnI 3 interface, although the morphological effects are not included in the DMM model. Direct numerical simulations, such as molecular dynamics, may be necessary for further discussion. Effective thermal conductivity of the Bi 2 Te 3 and MASnI 3 hybrid material. Although the actual thermal transport mechanism, such as superlattices with very short periodicity 61 , is too complex to explore here, we have aimed to discuss the effect of the interfacial thermal conductance of Bi 2 Te 3 /MASnI 3 on the effective thermal conductivity, κ using a simple composite model. 
Here, a one-dimensional model is used, in which Bi 2 Te 3 layers and MASnI 3 layers with a thickness of D µm are alternately arranged, as shown in the inset of Fig. 10; the parameters are the interfacial thermal conductance, ITC, and the thickness of each layer. The results indicate that the effective thermal conductivity of Bi 2 Te 3 /MASnI 3 asymptotically approaches 0.17 W/(m·K) at a film thickness sufficiently larger than 1 µm. This value is calculated from the experimental values of Bi 2 Te 3 and MASnI 3 : 2.11 W/(m·K) 62 and 0.09 W/(m·K) 41 , respectively. This upper limit does not depend on the interfacial thermal conductance because the influence of the interface is negligible in the limit of large D. In contrast, when the film thickness is sufficiently smaller than 1 µm, the interfacial thermal conductance significantly influences www.nature.com/scientificreports/ the effective thermal conductivity. The blue line in the figure shows the effective thermal conductivity estimated from the calculated interfacial thermal conductance of 1.75 MW/(m 2 ·K). The smaller the film thickness, the more effectively the interfacial thermal conductance of Bi 2 Te 3 /MASnI 3 can be utilized. Thus, it is expected that the thermal conductivity of the Bi 2 Te 3 /MASnI 3 composite, which consists of small Bi 2 Te 3 grains in MASnI 3 , will be significantly reduced. Conclusion In this study, we evaluated the stability and electronic state of interface structures of Bi 2 Te 3 (001) and MASnI 3 (001), and the thermal conductance of the interface between Bi 2 Te 3 and MASnI 3 along the (001) direction was estimated. In the structural optimization, the termination of MASnI 3 was fixed with SnI 2 at the interface and surface, whereas for the structure of Bi 2 Te 3 in contact with MASnI 3 , three termination structures were considered: Te I , Te II , and Bi termination. After structural optimization, around the Fermi energy, the interface structures with Te II and Bi termination resulted in the formation of interface levels attributed to the overlapping electron densities for both Bi 2 Te 3 and MASnI 3 at the interface. It is believed that the formation of interface levels enhances the affinity for the interface structure of Bi 2 Te 3 and MASnI 3 , and the binding energies for these interface structures are negative. Based on the calculation of the interfacial thermal conductance using DMM, it is expected that the Bi 2 Te 3 /MASnI 3 interface can significantly reduce the thermal conductivity. These results indicate that the Bi 2 Te 3 / MASnI 3 composite material is a possible candidate for an excellent thermoelectric material because it has the potential to decrease the thermal conductivity.
5,660.2
2022-01-07T00:00:00.000
[ "Materials Science" ]
Adaptation to chronic acidic extracellular pH elicits a sustained increase in lung cancer cell invasion and metastasis Acidic extracellular pH (pHe) is an important microenvironment for cancer cells. This study assessed whether adaptation to acidic pHe enhances the metastatic phenotype of tumor cells. The low metastatic variant of Lewis lung carcinoma (LLCm1) cells were subjected to stepwise acidification, establishing acidic pHe-adapted (LLCm1A) cells growing exponentially at pH 6.2. These LLCm1A cells showed increased production of matrix metalloproteinases (MMPs), including MMP-2, -3, -9, and -13, and pulmonary metastasis following injection into mouse tail veins. Although LLCm1A cells exhibited a fibroblastic shape, keratin-5 expression was increased and α-smooth muscle actin expression was reduced. Despite serial passage of these cells at pH 7.4, high invasive activity through Matrigel® was sustained for at least 28 generations. Thus, adaptation to acidic pHe resulted in a more invasive phenotype, which was sustained during passage at pH 7.4, suggesting that an acidic microenvironment at the primary tumor site is important in the acquisition of a metastatic phenotype. Electronic supplementary material The online version of this article (10.1007/s10585-019-09990-1) contains supplementary material, which is available to authorized users. Introduction Extracellular pH (pH e ) becomes acidic due to excess cellular glycolysis. In the presence of oxygen, lactic acid is the main cause of extracellular acidification, a process called the "Warburg effect" or "aerobic glycolysis" [1]. Because the expression of most glycolytic enzymes is driven by hypoxia inducible factor-1 (HIF-1), extracellular acidification is closely related to hypoxia [1]. Among lactate anion/ H + symporters, also known as monocarboxylate transporters (MCTs), the hypoxia-inducible subtype MCT4 is primarily responsible for the secretion of lactic acid. MCT4 exports lactate, thereby affecting the proliferation of tumor cells [2]. An alternative major cause of extracellular acidity in tumor tissue results from the hydration of CO 2 by tumor carbonic anhydrase IX [3,4]. HIF-1 activation in tumors up-regulates angiogenesis and/or lymphangiogenesis. These newly formed vessels provide primary tumor cells the opportunity to disseminate through the circulation [5]. Acidic pH e also induces the production of vascular endothelial cell growth factor (VEGF)-A [6], interleukin-8 (IL-8) [7], and VEGF-C [8] through an HIF-1 independent pathway. Thus, an acidic pH e microenvironment, whether independent of, in addition to, or synergistically with hypoxia, may support the malignant phenotype of cancer cells and play a role in metastasis. Tumor-derived acidic pH e can act as a feed-back stimulator of a metastatic phenotype. Our investigations of the association of acidic pH e with the metastasis-related activities of mouse B16 melanoma variants, including the induction of matrix metalloproteinase-9 (MMP-9) expression, found that MMP-9 induction correlated with the metastatic activity of B16 variants and the acceleration of tumor invasion through type IV collagen sheets [9,10]. Transient exposure to acidic pH e resulted in a switch from an epithelial to a mesenchymal phenotype, called an epithelial-mesenchymal transition (EMT) [11][12][13]. Transient acidic pH e 5.9-6.8 was found to potentiate the invasive and metastatic activities of these cells [8,12,[14][15][16][17][18][19]. 
In vivo mapping of pH e in mouse B16-F10 melanoma xenografts with CEST-MRI [20] showed that the pH e of most early stage tumors ranged between pH 6.0-6.2, whereas the pH e of most late stages tumors ranged between pH 5.7-6.7, with 10% of the area of late stage tumors having a pH e < 5.5. These findings suggested that primary tumors were continuously influenced by pH e 6.0-6.2 over a long period and that adaptation of tumor cells to this pH e range is an important step in tumor metastasis. Because an acidic microenvironment can chronically affect tumor cells in vivo, studies are needed to evaluate the chronic effects of pH e . Tumor cell lines have been subjected to chronic extracellular acidification and/or adaptation to pH e 6.7 for 2 weeks to 3 months [21][22][23]. We found that the growth rates of cells were equal at pH 6.8 and pH 7.4 and that these cells could grow at pH 6.5 after recovering from a transient decrease in proliferation rate. In vivo imaging showed that pH e 6.2 could be attained [20]. In this study, we established cells proliferating exponentially at pH 6.2 and investigated whether adaptation to acidic pH e increased tumor metastatic activity and whether the metastatic phenotype could be sustained at neutral pH e . Cells and cell culture A low metastatic variant of Lewis lung carcinoma (LLCm1) was established in our laboratory using an experimental lung metastasis method through tail vein injection [12]. Basal medium was prepared as described. Briefly, a 1:1 mixture of DMEM and F12 was supplemented with 15 mM HEPES, 4 mM H 3 PO 4 1.0 g/L NaHCO 3 , 100 units/mL penicillin G, and 0.1 mg/mL streptomycin sulfate, and its pH was adjusted with NaOH or HCl [14]. Cells were serially passaged with 0.05% trypsin/0.02% EDTA and cultured in the presence of 10% FBS at 37 °C in a humidified atmosphere in a 5% CO 2 incubator. Cells were adapted to acidic pH e by serial passage through media of stepwise decreasing pH (7.0, 6.8, and 6.5) until pH 6.2 was reached. The cells were maintained for 2-4 weeks at each pH and passaged 2-3 times per week, depending on growth rate. Adaptation to each pH e was confirmed by showing exponential growth after seeding cells at 2.5 × 10 5 cells/60 mm dish. Finally, acidic pH e -adapted cells (LLCm1A cells) were established by more than 40 passages (more than 3 months) through medium at pH 6.2 in the presence of 10% FBS. Where indicated, LLCm1A cells were passaged 3-10 times in medium at pH 7.4 in the presence of 10% FBS. Growth curve and doubling time Cells were suspended in medium at pH 7.4 containing 10% FBS and seeded onto 24-well plates. After 3 h, the medium was changed to medium of various pH containing 10% FBS. At this time, cells in some wells were counted and determined as the cell number at day 0. Cells were harvested using trypsin/EDTA and the number of cells in each well counted using the trypan blue dye exclusion method. Doubling time was calculated as (T 1 − T 0 )/log2 (N 1 /N 0 ), with N 0 and N 1 defined as the number of cells at the initial time (T 0 ) and after cultivation for time T (T 1 ), respectively. Lung metastasis All animal experiments were performed in accordance with the guidelines of the Ministry of Education, Culture, Sports, Science and Technology, the Ministry of Health, Labor and Welfare of Japan and ARRIVE [24]. The experimental protocols were approved by the Animal Experimental Committee of Ohu University (Koriyama, Japan) (#2014-15). 
LLCm1 and LLCm1A cells were harvested with trypsin/ EDTA, resuspended in DMEM/F12 (pH 7.4) containing 10% FBS, and incubated at 37 °C for 1 h. The cells were washed twice with Mg 2+ and Ca 2+ -free phosphate-buffered saline (PBS(-)) and resuspended in ice cold PBS(-). In experimental metastasis assays [12,25,26], 3 × 10 5 cells in 200 µl PBS(-) were injected into the tail vein of each 7-week-old male C57BL/6 mouse (Clea Japan, Tokyo, Japan). Each experimental group consisting of 6 mice was housed in a cage. Animals were maintained in the barrier facility for laboratory animals with a 12 h light-dark cycle and allowed 1 3 food and water ad libitum. Three weeks later, the mice were sacrificed by intraperitoneal injection of sodium pentobarbital (120 mg/kg). Their lungs were removed and the numbers of metastatic foci at lung surfaces were counted [26]. Reverse transcription-quantitative polymerase chain reaction (RT-qPCR) Total RNA was purified using the acid-guanidinium-thiocyanate-phenol-chloroform (AGPC) method and reversetranscribed to cDNA using a High-Capacity cDNA Reverse Transcription Kit. Target sequences were amplified by SYBR Premix Ex Taq II in a Thermal Cycler Dice Real Time System (TP-870, Takara Bio) using the specific primers listed in Table S1. The level of expression of each target gene was normalized relative to the level of Actb mRNA in the same samples. The data were analyzed by the 2 −ΔC t method [27], with normalized expression calculated as individual data point according to the formula: Fold gene induction = 2 −ΔC t value (Experimental group)/2 −ΔC t value (Control group). Control group: LLCm1 cells at pH 7.4. Experimental group: LLCm1 cells at pH 6.8, LLCm1A cells at pH 7.4, or LLCm1A cells at pH 6.8 Zymography MMP-2 and -9 activities were determined by gelatin-zymography, as described [9,10,12,26]. Briefly, cells were cultured in serum-free medium for 24 h. The proteins in the conditioned medium (CM) were concentrated by acetone precipitation and separated by electrophoresis in gelatincontaining 7.5% polyacrylamide-sodium dodecyl sulfate (SDS) gels, without prior heating or reduction. Loading quantity was adjusted to cell density in each experiment. After electrophoresis, the gels were washed with 2.5% Triton-X100 in Tris-HCl (pH 7.5), 5 mM NaCl to remove SDS, incubated in 50 mM Tris-HCl (pH 7.5), 10 mM CaCl 2 for 24 h at 37 °C, and stained with Coomassie Brilliant Blue R-250. Wound healing (scratch) assay Wound healing assays were performed as described [12]. Briefly, confluent cultures in 6-well plates were serumstarved for 24 h and scratched with a micropipette tip. After removal of debris, the cells were cultured in medium containing 0.2% FBS at pH 7.4 or pH 6.8. Photographs were taken at 18 h and the distance between the original edge of the wound and the front line formed by cells that had migrated was measured. In vitro invasion assay In vitro invasive activity was determined using Matrigel ® -coated polycarbonate porous filters (8 μm pores) mounted onto transwell chambers (Corning, Tewksbury, MA, USA) as described [12]. Briefly, cells were serumstarved overnight at pH 7.4 and maintained in serum-free media at pH 7.4 or pH 6.8 for 18 h. The culture medium was centrifuged, and the cell suspensions were stored at 37 °C. 
Adherent cells were harvested with trypsin/EDTA, incubated at 37 °C for 30 min in medium containing 10% FBS, washed twice with warmed PBS(-), re-suspended in the culture medium stored at 37 °C, and inoculated at a density of 5 × 10 5 cells/100 μl/chamber on an insert consisting of a Matrigel ® (37.9 μg/cm 2 )-coated filter. This insert had been mounted onto a well of a 24-well plate, which had been filled with 600 μl of 20% FBS-containing medium adjusted to the same pH as the chemoattractant. After incubation for 18 h, non-invasive cells were removed with a cotton swab and the invasive cells were fixed in 100% methanol, stained with Giemsa solution, and counted under a light microscope (× 200). Statistical analysis Results were expressed as mean ± SE. Two independent samples were compared by Student's t-tests, and more than two samples compared by ANOVA and the Holm method [28]. Data of in vitro assays were representative of two or more independent experiments, each of which contained triplicate samples (unless otherwise noted). P values less than 0.05 were considered statistically significant. Acidic pH e -adapted LLCm1 cells showed a fibroblastic morphology and increased metastatic activity To establish acidic pH e -adapted, or LLCm1A, cells, LLCm1 cells were conditioned by stepwise reductions in pH e , with the recovery of proliferative capacity confirmed at each pH e . Although LLCm1 cells continuously grew at pH e 6.5, they were unable to grow at pH e 6.2. A critical point was observed between pH e 6.5 and pH e 6.2. These cells were maintained at pH e 6.2 by medium renewal alone until significant growth was observed. Overall, more than 3 months were required to obtain proliferating LLCm1A cells at pH e 6.2. Acclimation involved the seeding of LLCm1 cells onto 24-well culture plates at pH e 7.4, followed 3 h later by replacement with medium at different pH; thereafter culture media were renewed every day. An obvious reduction in growth rate was not seen until pH e 6.5. However, cells showed almost no growth in medium at pH e 6.2. (Figure 1a, Table 1). If, however, cells were seeded at pH 7.4, the medium changed to a different pH after 1 day and this medium renewed every other day, the cells grew, even at pH 6.2, on day 2 (the first day of acidification) but the number of viable cells was reduced on day 3 (the second day of acidification) (Fig. S1). In contrast to parental LLCm1 cells, LLCm1A cells grew exponentially at pH e 6.8 and at pH e 6.2, although the doubling time at pH e 6.2 was slower (Fig. 1a, Table 1). Lag time was not obvious when LLCm1A cells were seeded at pH e 6.2 (Fig. S1), showing that these cells had high seeding efficiency. LLCm1A cells had a fibroblastic shape and cellto-cell contact was dispersed. In contrast, parental LLCm1 cells showed a cobblestone like morphology (Fig. 1b). Injection of LLCm1A cells subjected to 3 passages at pH e 7.4 into mouse tail veins gave rise to a greater number of lung metastases than parental LLCm1 cells (Fig. 1c). High production of matrix metalloproteinases The expression of MMPs was compared in LLCm1A and LLCm1 cells. To avoid differences in experimental conditions, both cell types were cultured at pH e 7.4. Expression of mRNAs encoding MMP-2, -3,-9, and -13 was higher in LLCm1A than in LLCm1 cells, whereas the level of Mmp14 mRNA, encoding membrane type 1 (MT1)-MMP, was lower in LLCm1A than in LLCm1 cells (Fig. 2). 
Adaptation to acidic pH e induces mesenchymal cell morphology and phenotype without typical mesenchymal marker expression Because LLCm1A cells had a spindle shape with little cellto-cell contact, their expression of mesenchymal and epithelial cell markers was investigated. Unexpectedly, the expression of Acta2 mRNA, encoding the mesenchymal marker αSMA, was lower and the expression of Krt5 mRNA, encoding the epithelial marker keratin-5, was higher in LLCm1A than in LLCm1 cells (Fig. 3). Although we observed a slight increase in the level of Zeb1 mRNA, the product of which reduces the expression of Cdh1 mRNA, encoding E-cadherin, Cdh1 mRNA expression was not elevated. The expression of other marker mRNAs did not differ in LLCm1 and LLCm1A cells. These findings suggest that mesenchymalepithelial transition (MET)-like changes, rather than EMT, occurred partly by adaptation to acidic pH e . Transient acidification further increases expression of MMPs Zymographic analysis of the pH e dependent secretion of MMP-2 and -9 showed that the production of both enzymes was highly enhanced at pH e 6.8 (Fig. 4a). In agreement with zymographic analysis, the expression of Mmp2 and Mmp9 mRNAs was significantly higher in LLCm1A than Fig. 1 LLCm1A cells exhibit high proliferation at pH 6.2 and have a fibroblastic cell shape and increased metastatic ability. Growth curve. a Cells in pH 7.4 medium containing 10% FBS were seeded at 8.5 × 10 4 cells/cm 2 in 24-well plates. Three hours later, the culture medium was changed to pH 7.4 (control), pH 6.8 or pH 6.2 containing 10% FBS, with the media changed every day. Viable cell numbers were determined using the trypan blue dye exclusion method. b Cells were plated at 2.0 × 10 4 cells/cm 2 in 24 well plates in pH 7.4 medium containing 10% FBS. After 24 h, the culture medium was changed to pH 7.4 (control), pH 6.8 or pH 6.2 medium containing 10% FBS and the cells maintained for 24 h. Viable cell numbers were determined using the trypan blue dye exclusion method. The arrow shows the number of cells as time zero. Representative results of two independent experiments are reported as mean ± SE (n = 3). c Morphology. LLCm1 and LLCm1A cells were plated onto plastic dishes and cultured in pH 7.4 medium containing 10% FBS for 2 days. Phase contrast micrographs were taken. Bar, 100 μm. d Metastasis. LLCm1A cells were passaged for 2 weeks in pH 7.4 medium containing 10% FBS. LLCm1 and LLCm1A cells in logarithmic growth phase at pH 7.4 were harvested and 3 × 10 5 cells were injected into the tail vein of each of six C57BL/6 mice. Three weeks later, the mice were sacrificed and the metastasized foci (shown as arrows) at the lung surfaces were counted. Arrow heads show the heart. In some cases, error bars are hidden by the data symbol due to small values. Representative results of two independent experiments are reported as mean ± SE (n = 6). *P < 0.05 in LLCm1 cells (Fig. 4b). In addition, transient acidification induced Mmp3 and Mmp13 mRNA expression. Different effects of adaptation to and transient stimulation by acidic pH e In contrast to the effects of transient acidification on MMP expression, acidification enhanced Krt5 mRNA expression in LLCm1A cells but reduced its expression in LLCm1 cells (Fig. 5a). We recently showed that TRPM5 is important for acidic pH e signaling and that high TRPM5 mRNA expression was associated with shorter survival of patients with some types of tumor [26]. 
Here, we investigated whether adaptation to acidic pH e increased Trpm5 mRNA expression, finding that the level of Trpm5 mRNA expression in LLCm1A cells was not affected by transient exposure to extracellular acidification (Fig. 5b). Although LLCm1 cells responded to transient acidification with an increase in Trpm5 mRNA, this level was only ≈ 15% of that in LLCm1A cells. Although LLCm1 cells responded to transient acidification with an increase in Trpm5 mRNA. LLCm1A cells show increased migration and in vitro invasion We previously showed that extracellular acidification of LLCm1 cells increased their migration and invasive activities [12]. We therefore tested the migration and Matrigel ® invasion activities of LLCm1A cells. Scratch assays clearly showed that LLCm1A cells had greater migratory activity than LLCm1 cells (Fig. 6a, b). The activity of both cells was also upregulated by transient treatment with acidic pH e . In addition, LLCm1A cells showed higher in vitro invasive activity through Matrigel ® than parental LLCm1 cells Representative results of three independent experiments are reported as mean ± SE (n = 3). *P < 0.05, **P < 0.01 (Fig. 6c), with fibroblastic morphology and invasive activity sustained after long-term passage at neutral pH e (Fig. 7). Because our study was designed to assess whether tumor cells exposed to acidic pH e have increased their metastatic phenotype even at physiological pH e , such as in blood, facilitating the formation of secondary tumors, LLCm1A cells were cultured in medium containing 10% serum at pH e 7.4 and the effects of this "switch to neutral pH e " on invasive phenotype was assessed. Unexpectedly, pH e 6.2-adapted LLCm1A cells detached within several hours and were no longer maintained in serum-free or serum-reduced (2% FBS) conditions (Fig. 2S). In contrast, these cells spread well and could be maintained in serumfree and serum-reduced (2% FBS) conditions at pH e 6.5. MMP-2 and -9 levels and invasive activity were high under acidic conditions (pH e 6.5-6.8) without switching to neutral pH (Fig. 2S). Although MMP activities were reduced as pH e increased, these activities were significantly higher than in medium at pH e 7.4. These results seemed complementary to the transient increases in MMP expression (Fig. 4) and migration/invasion (Fig. 6). Fig. 3 Expression of mesenchymal and epithelial marker mRNAs. Total RNA was purified from serum-free cultures incubated for 18 h at pH 7.4, reverse-transcribed and amplified by qPCR with specific primer sets for the mesenchymal markers N-cadherin (Cdh2), vimentin (Vim), and α-smooth muscle actin (Acta2); and the epithelial markers E-cadherin (Pdh1) and keratin5 (Krt5). Representative results of two independent experiments are reported as mean ± SE (n = 3). *P < 0.05, **P < 0.01 Adaptation to acidic pH e is not simple selection of clones able to grow at pH e 6.2 To test whether LLCm1A cells resulted from the simple clonal growth of preexisting acidic pH e resistant cells rather than adaptation to acidic pH e , parental LLCm1 cells were cloned and their growth, MMP production and invasiveness were compared at pH e 7.4 and pH e 6.8 (Fig. 8). Of the LLCm1 cell clones assayed, clone 4 had the highest growth rate at acidic pH e . Although high amounts of MMP-2 and -9 were secreted, invasive activity was limited. 
These results suggested that the acquisition by LLCm1A cells of invasive activity was not simple clonal selection of preexisting acidic pH e -resistant cells but was also due to the dominant growth of "acidic pH e -adapted cells". However, these findings also suggested the possibility of clonal growth of preexisting acidic pH e -resistant cells. Nevertheless, these results suggested that acidic pH e shifted the heterogeneity of tumors to the accumulation of metastatic populations in the tumor microenvironment. Discussion Metastatic activity has been associated with the tumor microenvironment, which consists of growth factors, the extracellular matrix, hypoxia, and acidic pH e . The acidic pH e surrounding tumors is caused by the tumor cells' secretion of lactic acid and CO 2 . Imaging technology has shown that tumors surrounded by pH e are heterogeneous, consisting of acid donor and recipient cells [29]. This may be reflected in their relative use of MCT types, with donor cells mainly using MCT4 to secrete lactate/H + [2] and recipient cells mainly using MCT1 to incorporate lactate/ H + [30]. Initially, we investigated the effect of transient acidic pH e on metastatic phenotype [9,26,31,32]. However, metastasis is thought to be caused by the dissemination of cells from the primary tumor, with tumor cells being affected by the tumor microenvironment including acidic pH e . This study therefore focused on the effects of adaptation to acidic pH e especially on tumor invasion and metastasis. Transient acidification induces effective but reversible effects [9,33], called the "memory effect" [33], which may be responsible for increased experimental metastasis induced by transient acidification [33,34]. This study showed that tumor cell adaptation to acidic pH e resulted in a metastatic phenotype. The high invasive activity of acidic pH e -adapted tumor cells was sustained through at least 28 serial passages (about 3 months) at neutral pH e , suggesting that the sustained invasive phenotype of these cells was likely not due to a memory effect but rather to an acquired phenotype. Thus, the acidic Migration and invasive activities are higher in LLCm1A than in LLCm1 cells, with these activities further increased by acidic pH e . Confluent cultures were scratched with micropipette tips and incubated for 18 h in medium at pH 7.4 or pH 6.8 containing 2% FBS. a Phase-contrast micrographs. b Relative migration distance relative to LLCm1 cells at pH 7.4 (n = 8). c In vitro invasion activity through Matrigel ® . Serum-starved cells were maintained in serumfree medium at pH 7.4 or 6.8. Medium was collected, and cells were harvested by trypsinization and suspended in the same own medium. Cells (5 × 10 5 ) were placed onto Matrigel ® -coated filters in transwell chambers. The chemoattractant was 20% FBS. Cells that passed through onto the lower surface of the filter were counted after Giemsa staining. In some cases, error bars are hidden by the data symbol due to small values. Representative results of two independent experiments are reported as mean ± SE (n = 3). *P < 0.05, **P < 0.01 Fig. 7 Fibroblastic morphology and high invasive activity of LLCm1A cells are sustained until late passage generation at pH 7.4. a Cells were plated onto plastic dishes and cultured in pH 7.4 medium containing 10% FBS for 2 days. Phase-contrast micrographs were taken. The number followed by P in round brackets indicates the number of cell passages. Bar, 100 μm. 
b Serum-starved cells were maintained in serum-free medium at pH 7.4 or 6.8. Culture medium was collected, and the cells were harvested by trypsinization and suspended in the same culture medium. Cells (5 × 10⁵) were placed onto Matrigel®-coated filters in transwell chambers. Cells that passed through onto the lower surface of the filter were counted after Giemsa staining. The number followed by P indicates the number of cell passages. In some cases, error bars are hidden by the data symbol due to small values. Representative results of two independent experiments are reported as mean ± SE (n = 3). *P < 0.05, **P < 0.01 We also observed differences between cells exposed to transient acidification and those adapted to acidic pHe. Although Krt5 mRNA expression was higher in acidic pHe-adapted LLCm1A cells than in LLCm1 cells, it was reduced in LLCm1 cells exposed to transient acidification. In contrast, Trpm5 mRNA, which encodes a molecule involved in sensing acidic pHe and whose overexpression in patients with melanoma and gastric cancer has been associated with shorter survival [26], was not affected by transient acidification in LLCm1A cells. Although transient exposure of cells to acidic pHe induced EMT [11,12,35], acidic pHe-adapted LLCm1A cells unexpectedly showed reduced expression of Acta2 mRNA, which encodes a mesenchymal marker, and increased expression of Krt5 mRNA. Our working hypothesis was that cells of primary tumors affected for a long time by acidic microenvironments metastasize through the circulation. EMT is an important step, especially for the dissemination of cells from primary tumors, whereas MET is involved in the establishment of secondary tumor formation [36]. This study assessed the in vivo metastatic potential of tumor cells injected through the tail vein, an experimental lung metastasis model evaluating the steps in secondary tumor formation. Therefore, this experimental design reflected a situation in which primary tumor cells that have survived and adapted to acidic pHe intravasate into the circulation, which is at pHe 7.4. The acquired metastatic potential of acidic pHe-adapted tumor cells was sustained at physiological pH, with these cells playing an important role in secondary tumor formation through MET-like conversion. Transient and chronic extracellular acidification have been reported to affect metabolic pathways through epigenetic alterations, including histone acetylation and DNA methylation [18,37-39]. Adaptation, or, in this study, resistance to acidic pHe may also be regulated by these epigenetic alterations. Because highly proliferative cells consume glucose to generate ATP, and deoxyribose from the pentose phosphate pathway, adaptation to extracellular acidification resulted in an escape from glucose dependence [37]. Cancer stem cells (CSCs) and tumor-initiating cells, which are resistant to drugs and divide asymmetrically, are thought to be the origin of tumor recurrence and metastasis [40]. CSCs are likely affected by, but are not responsible for, extracellular acidification [41], suggesting that cells adapted to acidic pHe may have a partial CSC phenotype and may be as much of a therapeutic target as CSCs [42]. The number of passages of cultured cells has been reported to affect tumor phenotype.
Serial long-term or late passage was found to increase the metastatic activity of rat mammary adenocarcinomas [43], whereas serial passage of human pancreatic carcinomas had no effect on invasive activity [44]. Late passage was found to increase metastatic activity but not invasion through Matrigel® [45], and late passage of human ovarian carcinoma cells increased MMP-9 but not MMP-2 expression [46]. Moreover, KRT5 mRNA expression was higher in early than in late passage cells of the human mammary epithelial MCF10A cell line, with late passage cells having a more mesenchymal phenotype than early passage cells [47], and late passage was reported to decrease the stemness of human amnion mesenchymal cells [48]. Fig. 8 Growth at pH 6.8, MMP-2 and -9 secretion, and invasion properties of LLCm1 cell clones. LLCm1 cells cultured in medium containing 10% FBS at pH 7.4 were cloned by limiting dilution. a Growth activity at pH 6.8. Cells were seeded at 8.5 × 10⁴ cells/cm² in 24-well plates in pH 7.4 medium containing 10% FBS. Three hours later, the culture medium was changed to pH 6.8 medium containing 10% FBS, and the cells were further cultured for 24 h. Viable cell numbers were determined using the trypan blue dye exclusion method. Data are expressed relative to the growth rate of parental LLCm1 cells. b Zymography. Cells pre-incubated with serum-free medium at pH 7.4 for 18 h were incubated in serum-free medium at pH 7.4 or 6.8 for an additional 24 h. MMPs in the CM were collected, concentrated by acetone precipitation, and analyzed by gelatin zymography. c Invasion. Serum-starved cells were maintained in serum-free medium at pH 7.4 or 6.8. Culture medium was collected, and the cells were harvested by trypsinization and suspended in the same culture medium. Cells (5 × 10⁵) were placed onto Matrigel®-coated filters in transwell chambers. Cells that passed through onto the lower surface of the filter were counted after Giemsa staining. Representative results of two independent experiments are reported as mean ± SE (n = 3). *P < 0.05, **P < 0.01, ***P < 0.001 In the present study, LLCm1A cells were derived from parental LLCm1 cells. These parental cells were serially passaged in our laboratory and showed a stable phenotype, as assessed by morphology, MMP production, in vitro invasiveness and experimental metastasis. These activities were not increased by serial passage, in contrast to previous findings [12]. Moreover, tumor cell growth was extremely slow during adaptation to acidic pH but recovered after adaptation, with adapted cells showing exponential growth without a lag time immediately after seeding. Because a study of LLC cells found that the metastatic heterogeneity of tumors pre-exists [49], we evaluated the heterogeneity of MMP production, invasiveness and growth potential at acidic pHe. Despite having growth potential at acidic pHe with high MMP production, LLCm1 cell clone 4 did not have invasive activity, suggesting that the acquisition of invasive and metastatic ability is likely due not only to a simple effect of serial passage but also to adaptation to acidic pHe. Because our experiments could not completely distinguish between simple clonal selection and adaptation to acidic pHe, both remain possible. Our results showed, however, that acidic pHe altered the tumor microenvironment, shifting tumor heterogeneity toward the accumulation of a metastatic population.
Because acidic pHe was reported to induce the expression of sterol regulatory element-binding protein 2 (SREBP2) in pancreatic cancer cells [18], lipid homeostasis may regulate tumor metastasis in acidic microenvironments. In conclusion, these findings suggest that prolonged tumor cell acidification induces a sustained invasive phenotype through a mechanism differing from that resulting from transient exposure to acidic pHe. Author contributions This study is part of SS's Ph.D. thesis at Ohu University Graduate School of Dentistry, Koriyama, Japan. SS performed experiments and data analysis as major contributions to this manuscript. YK planned, designed, and supervised all experiments. TM and AS supported the molecular biological and animal studies, respectively. SS wrote the manuscript, which was proofread by YK, TM, and AS. All authors approved submission of the final manuscript. Funding This work was partly supported by JSPS KAKENHI Grant Numbers JP16K11517 and 19K10074 (to YK) and 17K11885 (to AS). Data availability The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Compliance with ethical standards Conflict of interest The authors declare that they have no conflict of interest. Ethical approval All animal experiments were performed in accordance with the guidelines of the Ministry of Education, Culture, Sports, Science and Technology and the Ministry of Health, Labour and Welfare of Japan and the ARRIVE guidelines. The experimental protocols were approved by the Animal Experimental Committee of Ohu University (Koriyama, Japan) (#2014-15). Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Counterintuitive Electrostatics upon Metal Ion Coordination to a Receptor with Two Homotopic Binding Sites The consecutive binding of two potassium ions to a bis(18-crown-6) analogue of Tröger’s base (BCETB) in water was studied by isothermal titration calorimetry using four different salts, KCl, KI, KSCN, and K2SO4. A counterintuitive result was observed: the enthalpy change associated with the binding of the second ion is more negative than that of the first (ΔHbind,2° < ΔHbind,1°). This remarkable finding is supported by continuum electrostatic theory as well as by atomic scale replica exchange molecular dynamics simulations, where the latter robustly reproduces experimental trends for all simulated salts, KCl, KI, and KSCN, using multiple force fields. While an enthalpic K+–K+ attraction in water poses a small, but fundamentally important, contribution to the overall interaction, the probability of the collapsed conformation (COL) of BCETB, where both crown ether moieties (CEs) of BCETB are bent in toward the cavity, was found to increase successively upon binding of the first and second potassium ions. The promotion of the COL conformation reveals favorable intrinsic interactions between the potassium coordinated CEs, which further contribute to the observation that ΔHbind,2° < ΔHbind,1°. While the observed trend is independent of the counterion, the origin of the significantly larger magnitude of the difference ΔHbind,2° – ΔHbind,1° observed experimentally for KSCN was studied in light of the weaker hydration of the thiocyanate anion, resulting in an enrichment of thiocyanate ions close to BCETB compared to the other studied counterions. ■ INTRODUCTION Receptors are proteins located inside or on the surface of cells that can receive and transduce chemical signals. Upon binding of an external ligand, a conformational change is triggered in the receptor, which in turn activates a physiological function. 1 Due to the abundance and importance of receptors in biological systems, the development and application of synthetic receptors has received much attention. One objective for the development of synthetic receptors is to provide means to systematically study the fundamental thermodynamic factors governing receptor/ligand associations. 2 This includes the investigation of concepts such as entropy−enthalpy compensation 3,4 and binding cooperativity, 5 concepts of fundamental importance in biological receptors. One recognition motif commonly utilized in synthetic receptors is the crown ether (CE). Crown ethers are macrocyclic oligomers of ethylene oxide capable of selectively binding cations. 6 By altering the size of the cavity, CEs can be tuned to recognize ions of a certain size, and, as a result, they have found applications in many areas within host−guest chemistry, including the design of ion-selective electrodes, 7−9 recovery of cesium from nuclear waste, 10,11 and drug delivery. 12−14 While the majority of the studies of CE-based synthetic receptors concern receptors containing a single CE, we herein present a study of the thermodynamics governing the binding of potassium ions to a ditopic bis(18-crown-6) analogue of Tröger's base (BCETB, Figure 1). 15,16 The association of multiple cations to a single, multitopic receptor represents an interesting thermodynamic system, with many different factors contributing to the overall stability of the complex.
Previous studies of host−guest complexes containing two metal cations have employed interaction models including cation−cation repulsion and solvation effects to explain experimental observations. 17−19 Overall, considering also the interactions within the complex, the absolute binding affinity of the cation relies on a balance between the desolvation of the ion and the binding site and the subsequent formation of intermolecular interactions when the ion is fixed in the binding site, involving both interactions within the complex and interactions between the complex and the solvent. We have here estimated the standard free energies and enthalpies for the consecutive binding of two potassium ions to BCETB using isothermal titration calorimetry (ITC). 20 The ITC experiments were performed using KCl, KI, KSCN, and K2SO4 in order to account for the possible influence of the counterion on the binding thermodynamics. To elucidate the origin of the observed difference in the binding enthalpy of the first and second potassium ion, we have employed continuum electrostatic theory and replica exchange molecular dynamics (REMD) simulations. 21 ■ RESULTS AND DISCUSSION Isothermal Titration Calorimetry. Figure 2 shows the heat flow diagram and the normalized, integrated heats obtained from the ITC experiments with potassium chloride (the corresponding experimental data and fits from addition of solutions of potassium iodide, potassium thiocyanate, and potassium sulfate to BCETB in water are reported in the Supporting Information). Two different binding models were employed: (i) the sequential binding sites model and (ii) the single set of identical sites model. 22 In the former model (i), two binding constants are estimated; K1 defines the equilibrium between the states with zero and one potassium ion bound, whereas K2 corresponds to the binding of a second potassium ion. In the latter model (ii), only one binding constant, K, is defined, corresponding to the binding of a potassium ion to either of the sites regardless of whether there is another ion already bound to the other site. To determine the goodness of fit, the reduced chi-squared statistic, χ²/ν, was calculated, which gives the ratio of the fitting error and the measurement error (see the Supporting Information). The sequential binding sites model fits the data significantly better than the single set of identical sites model, as evident from the smaller value of χ²/ν for the former (χ²/ν = 12.02 compared to χ²/ν = 318.80). The introduction of more parameters in a model is associated with the risk of overfitting, which would be implied by a value of χ²/ν significantly less than unity. 25 This was not observed for either of the models, but due to the significantly smaller value of χ²/ν, we chose to describe the binding of potassium ions to BCETB using the sequential binding sites model. The corresponding fits applied to systems with the three other potassium salts (KI, KSCN, and K2SO4) are reported in the Supporting Information. The binding constants K1 and K2 obtained from the ITC experiments with BCETB for each of the potassium salts are presented in Table 1. In general, the estimated binding constants for the binding of the first potassium ion (K1) are similar to reported binding constants for the binding between 18-crown-6 and potassium (107 ± 25 and 138 ± 7). 26,27
The binding constants for the binding of the second potassium ion (K2) are lower than would be expected for a ditopic receptor with two identical noninteracting binding sites. The binding constants describing the consecutive binding of several ligands to a receptor are related partly through statistical factors that depend on the total number of binding sites and the number of occupied binding sites (see the Supporting Information). 28,29 For a receptor with two binding sites, this statistical factor dictates that the binding constant for the binding of the second ligand should be four times lower than that of the first (referred to as statistical binding). If the relationship between the two binding constants deviates from this ratio, the binding sites are either not identical or behave cooperatively (i.e., the binding of one ligand influences the binding affinity for the second ligand). In our case, the binding constants for the binding of the second potassium ion (K2) are lower than the statistical binding, which indicates negative cooperativity. This is also expressed by the calculated stepwise cooperativity parameters (ρ, see Table 1), which are defined so that ρ < 1 means that the binding sites exhibit negative cooperativity. 23,24 The standard free energies and binding enthalpies estimated from the ITC experiments are shown in Table 2 and Table 3, respectively. While the binding of the first potassium ion has a more negative free energy than the second (ΔΔGbind° > 0), the binding of the second ion is, surprisingly, enthalpically favored (ΔΔHbind° < 0). In contrast to the rather small difference in the binding free energy of the first and second potassium ion (1.12−1.26 kcal/mol), the difference in their binding enthalpies is surprisingly large in magnitude, ranging from −2.64 to −5.27 kcal/mol, depending on the counterion. Part of the less favorable binding free energy for the second potassium ion can be attributed to the before-mentioned statistical factor (Figure S4, Supporting Information). At room temperature, this effect contributes RT ln 4 ≈ 0.8 kcal/mol to ΔΔGbind°. For the ITC experiments performed with potassium chloride, TΔΔSbind° = ΔΔHbind° − ΔΔGbind° = −3.78 kcal/mol, meaning that the process of binding the second potassium ion is associated with an entropy decrease 3.78 kcal/mol larger than that of the first. Of these 3.78 kcal/mol, 0.8 kcal/mol is purely statistical, whereas the rest must be a consequence of additional, system-specific entropic losses upon binding (e.g., reduced flexibility of the complex or increased ordering of water in the solvation shells). This is in line with previous examples found in the literature, where negative cooperativity is usually found to be mainly entropy-driven (i.e., the binding of ligands results in the loss of configurational entropy). 30 The trends that ΔΔGbind° > 0 and ΔΔHbind° < 0 are reproduced for all of the potassium salts (Tables 2 and 3), indicating that the observed thermodynamic trends are independent of the counterion. The absolute values of ΔΔGbind° are similar for all potassium salts. However, whereas potassium chloride, potassium iodide, and potassium sulfate show similar quantitative values of ΔΔHbind°, the experiment with potassium thiocyanate shows an even more pronounced negative ΔΔHbind° (−5.27 kcal/mol compared to −2.64 kcal/mol for potassium chloride).
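As an aside on the cooperativity bookkeeping above, the statistical factor and the parameter ρ are easy to reproduce numerically. The sketch below uses illustrative, KCl-like values for K1 and K2 (the actual fitted constants are in Table 1 and are not reproduced here); it is not the authors' analysis code.

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # K

def cooperativity(K1, K2):
    """Stepwise cooperativity parameter rho = 4*beta2/beta1**2 = 4*K2/K1;
    rho = 1 is statistical binding, rho < 1 indicates negative cooperativity."""
    beta1, beta2 = K1, K1 * K2
    return 4.0 * beta2 / beta1**2

# Illustrative, dimensionless binding constants (placeholders, not the fit)
K1, K2 = 120.0, 15.0
print(f"rho                  = {cooperativity(K1, K2):.2f}")
print(f"RT ln 4 (statistics) = {R * T * math.log(4):.2f} kcal/mol")
# ddG_bind = dG_bind,2 - dG_bind,1 = RT ln(K1/K2)
print(f"ddG_bind             = {R * T * math.log(K1 / K2):.2f} kcal/mol")
```

With these placeholder constants, ρ = 0.5 (negative cooperativity) and ΔΔGbind° ≈ 1.2 kcal/mol, of which RT ln 4 ≈ 0.8 kcal/mol is purely statistical, mirroring the decomposition described above.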
One potential explanation for the observed differences could be the difference in hydration, where the thiocyanate ion is weakly hydrated in water and is known to possess a higher affinity for apolar surfaces compared to the more strongly hydrated chloride, iodide, and sulfate ions. 31,32 The pronounced enrichment of thiocyanate ions around the receptor compared to the other counterions could be expected to stabilize the positive charge that is accumulated in the complex as the potassium ions bind to the receptor. However, to fully elucidate the effect of anion accumulation around BCETB on the magnitude of the negative ΔΔHbind°, detailed analyses of where on BCETB the anions accumulate and of the type of interactions causing this accumulation are needed. Due to the poor convergence of such properties resulting from the low salt concentrations in the systems studied herein, these types of analyses were not feasible and are thus left for future studies. Since the thermodynamic trends appear to be independent of the counterion, and we are mainly interested in the origin of the negative ΔΔHbind° value, the simulations and discussion presented below are mainly focused on the binding between BCETB and one of the salts, KCl. Continuum Electrostatic Theory. The consecutive binding of cationic metal ions to a single, ditopic receptor entails a contribution from the free energy of interaction between the cations, G++(r, T), limiting the stability of the final complex. Intuitively, this interaction is repulsive for any (finite) cation separation, r, and can be preliminarily estimated using Coulomb's law, which for two monovalent ions of equal charge yields

G++(r, T) = N_Av e² / (4π ϵ0 ϵr(T) r)   (1)

where e is the elementary charge, N_Av is the Avogadro constant, ϵ0 is the vacuum permittivity, ϵr is the relative dielectric constant of the solvent, and r is the ion separation. The relative dielectric constant, ϵr(T), contains all rotational and translational degrees of freedom of the solvent and is therefore a temperature-dependent quantity, 33 whereby G++(r, T) should be regarded as a free energy. The enthalpy of interaction (for full derivation, see the Supporting Information) is hence given as 34

H++(r, T) = G++(r, T) [1 + ∂ ln ϵr / ∂ ln T]   (2)

and is thus an average measure of how the dielectric properties of water change with temperature. Hence, the model captures the temperature dependence of the water-mediated electrostatic interaction between two charges in the solution bulk. G++(⟨r⟩) and H++(⟨r⟩), calculated according to eqs 1 and 2 with ⟨r⟩ = 11.5 Å (the average distance between the potassium ions in the two binding sites), are included in Table 2 and Table 3. However, the magnitudes of the thermodynamic parameters are underestimated compared to the experimentally estimated values. The positive sign of ΔΔGbind° (G++(⟨r⟩)) is due to the entropic penalty of bringing the two potassium ions closer (from infinite separation to the average distance between the binding sites in BCETB). The model further correctly predicts a negative, however small, enthalpic contribution (H++(⟨r⟩) = ΔΔHbind° = −0.14 kcal/mol) to the difference in the binding free energies. The negative sign is a pure consequence of the dielectric properties of water (1 + ∂ ln ϵr/∂ ln T < 0).
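Since eqs 1 and 2 involve only tabulated constants and the dielectric constant of water, the continuum estimate can be checked in a few lines of code. The sketch below is not the authors' code; it assumes the Malmberg−Maryott fit for ϵr(T) of water and takes the derivative numerically, and it yields values of the same size as those quoted above (G++ ≈ 0.4 kcal/mol and H++ ≈ −0.13 kcal/mol at ⟨r⟩ = 11.5 Å).

```python
import math

e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
N_Av = 6.02214076e23     # Avogadro constant, 1/mol
KCAL = 4184.0            # J per kcal

def eps_r(T):
    """Static dielectric constant of water (Malmberg-Maryott fit, T in K)."""
    t = T - 273.15
    return 87.740 - 0.40008 * t + 9.398e-4 * t**2 - 1.410e-6 * t**3

def G_pp(r, T):
    """Eq 1: screened Coulomb free energy of two +1 ions, kcal/mol."""
    return N_Av * e**2 / (4 * math.pi * eps0 * eps_r(T) * r) / KCAL

def H_pp(r, T, dT=0.5):
    """Eq 2: H = G * (1 + d ln eps_r / d ln T), derivative taken numerically."""
    dlneps_dlnT = (math.log(eps_r(T + dT)) - math.log(eps_r(T - dT))) / \
                  (math.log(T + dT) - math.log(T - dT))
    return G_pp(r, T) * (1.0 + dlneps_dlnT)

r, T = 11.5e-10, 298.15  # average K+ separation in BCETB (m) and temperature
print(f"G++ = {G_pp(r, T):+.2f} kcal/mol")  # positive: entropic penalty
print(f"H++ = {H_pp(r, T):+.2f} kcal/mol")  # small and negative
```

Because ∂ ln ϵr/∂ ln T ≈ −1.36 for water at 25°C, the bracket in eq 2 is negative, which is the whole origin of the counterintuitive enthalpic attraction.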
Notes to Tables 1−3: The stepwise cooperativity parameter is calculated according to ρ = 4β2/β1², where β1 = K1 and β2 = K1K2. 23,24 The original binding constants in units of M⁻¹ have been normalized with the standard concentration 1 M, yielding the dimensionless binding constants K1 and K2 and making them directly related to the standard binding free energies through ΔGbind° = −RT ln K (see Supporting Information). The parameters estimated from ITC experiments are based on the normalized, integrated heats averaged over three replicas of each experiment; the associated errors are the standard deviations of the parameters generated with each replica. The differences in the standard binding free energies and enthalpies are calculated according to ΔΔGbind° = ΔGbind,2° − ΔGbind,1° and ΔΔHbind° = ΔHbind,2° − ΔHbind,1°, with the exception of continuum electrostatic theory, where ΔΔGbind° = G++ and ΔΔHbind° = H++. Continuum electrostatics also predicts a decrease in entropy as the two cations approach each other: −TΔΔSbind° = ΔΔGbind° − ΔΔHbind° = 0.51 kcal/mol (from Table 2 and Table 3). Although small, the enthalpy and entropy changes upon bringing the potassium ions from infinite separation (or bulk) to their average separation in BCETB (ΔΔHbind° = −0.14 kcal/mol and −TΔΔSbind° = 0.51 kcal/mol) can explain part of the experimentally estimated values (for KCl, ΔΔHbind° = −2.64 kcal/mol and −TΔΔSbind° = 3.78 kcal/mol). Replica Exchange Molecular Dynamics. To gain a more detailed, molecular understanding of the mechanisms underlying the binding thermodynamics, we performed REMD simulations. Due to the lack of reliable force fields for divalent ions such as the sulfate ion, we included only the monovalent ions (chloride, iodide, and thiocyanate) in the simulation studies. For the system with BCETB and potassium chloride, we studied the conformational dynamics of BCETB, which enabled us to investigate the role of BCETB in the binding process. In addition, we have analyzed the water in the vicinity of the binding sites to elucidate its structural response to the binding of K+. Finally, to gain insight into the impact of the counterion, we have computed the relative affinities of chloride, iodide, and thiocyanate to BCETB. For the purpose of the REMD simulations, starting from the OPLS-AA force field, 35 we developed two different force fields, one using partial charges on BCETB according to the assigned OPLS-AA atom types and the other using charges determined from density functional theory (DFT) 36 calculations (Table S2, Supporting Information). By employing these two different force fields, we enable a more robust comparison between experiments and simulations while simultaneously probing the impact of the partial charges on BCETB on the determined binding free energies and enthalpies. The force fields were validated by comparing the differences ΔΔGbind° = ΔGbind,2° − ΔGbind,1° and ΔΔHbind° = ΔHbind,2° − ΔHbind,1° obtained from simulation with those estimated from ITC experiments, with emphasis on the agreement in the latter.
The best-performing force field was subsequently used to analyze changes in the conformational ensemble of BCETB and the solvent response upon binding of the first and second potassium ion. In addition, we tested a force field where the charges on BCETB were determined from DFT calculations, but for the bound states the charges for the bound potassium ion(s) and BCETB were determined simultaneously. This approach allows for charge transfer between BCETB and the bound potassium ion(s) and gives three sets of charges for the different complexes: the free BCETB, BCETB with one potassium ion bound, and BCETB with two potassium ions bound. By employing these charges in REMD simulations, we tested the effect of polarization of the complex upon the sequential binding of two potassium ions. In Figure 3, simulated values of ln K = −ΔGbind,i°/(RT) for the system with potassium chloride, obtained with the force field using DFT charges, are plotted against 1/T, where i denotes the number in the series of the two binding events and the green and purple colors correspond to i = 1 (first K+) and i = 2 (second K+), respectively. To account for the effect related to the difference in the degree of degeneracy for the states with zero, one, and two K+ bound, these free energies have been corrected with the factors RT ln 1/2 and RT ln 2, respectively, to obtain the standard binding free energies. For the relative standard binding free energy (ΔΔG°), this degeneracy effect contributes with a factor RT ln 4 (see Supporting Information). The slope of the linear least-squares fit gives −ΔHbind,i°/R according to the linear van't Hoff equation. 37 The binding constants, standard binding free energies, and standard binding enthalpies determined from the REMD simulations are presented in Tables 1−3. For the system containing BCETB and potassium chloride, the thermodynamic parameters were determined using each of the three different force fields: OPLS-AA, DFT charges, and DFT charges (pol.). In general, both the force field using OPLS-AA charges and that using DFT charges reproduce the experimentally estimated trends in the thermodynamic parameters (ΔGbind,1°, ΔGbind,2°, ΔHbind,1°, ΔHbind,2°, and ΔΔHbind° < 0, whereas ΔΔGbind° > 0, Tables 2 and 3). For the force field using polarized DFT charges on BCETB (DFT charges (pol.)), the agreement with experimental values is poor, with inverted signs for all thermodynamic parameters except ΔΔGbind°, which in turn is significantly overestimated (66.0 kcal/mol compared to 1.14 kcal/mol from experiments, Table 2). Hence, we chose to exclude this force field from further analysis and discussion. The inverted signs of the binding free energies and enthalpies might be a result of the net positive charge accumulated in BCETB during the charge transfer from the bound potassium ions (Table S3, Supporting Information), introducing repulsion between the receptor and the bound ions. This is also reflected in the large binding site volume determined from the mean-squared positional fluctuation of the potassium ion in the binding site (Table S5, Supporting Information). We anticipate that to more correctly include the effect of polarization, an explicitly polarizable force field would likely be required. For the other simulated salts (KI and KSCN), only the force field using DFT charges was employed due to its better performance in determining ΔΔHbind° for KCl (−1.7 kcal/mol compared to −0.6 kcal/mol from the force field using OPLS-AA charges, Table 3).
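The van't Hoff extraction of ΔHbind,i° from the simulated ln K values amounts to a linear fit of ln K against 1/T. A minimal sketch is given below, with invented ln K values standing in for the six-replica data of Figure 3 (the simulated temperatures match the REMD ladder described in Materials and Methods).

```python
import numpy as np

R = 1.987e-3  # gas constant, kcal/(mol*K)

# Six replica temperatures (K) and hypothetical ln K values (placeholders
# for the real simulation output plotted in Figure 3).
T = np.array([298.15, 302.15, 306.15, 310.15, 314.15, 318.15])
lnK = np.array([4.80, 4.62, 4.45, 4.29, 4.13, 3.98])

# Linear van't Hoff equation: ln K = -(dH/R)*(1/T) + dS/R
slope, intercept = np.polyfit(1.0 / T, lnK, 1)
dH = -slope * R              # binding enthalpy from the slope, kcal/mol
dG = -R * T[0] * lnK[0]      # binding free energy at 298.15 K, kcal/mol
print(f"dH_bind ~ {dH:.1f} kcal/mol, dG_bind(298 K) ~ {dG:.1f} kcal/mol")
```

Because ΔH comes from a slope over a narrow 20 K window, small errors in ln K are strongly amplified, which is exactly why the enthalpy uncertainties reported next are much larger than the free energy uncertainties.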
For all of the three tested force fields, the binding free energy predictions are associated with fairly low standard deviations (0.1−0.2 kcal/mol), whereas the binding enthalpies show higher uncertainties (1.6−3.1 kcal/mol), resulting from the amplification of errors when performing the least-squares fit to obtain ΔHbind,i° from the slope (Figure 3). The enthalpy predictions from simulation should thus be treated as qualitative rather than quantitative predictions. As seen in Table 3, although the values of ΔHbind,1°, ΔHbind,2°, and ΔΔHbind° determined from the force field using DFT charges are underestimated compared to our experiments, the experimental trends are reproduced in terms of correct signs and orders of magnitude for all simulated salts. Perturbation in Conformational Space. It has previously been shown that 18-crown-6 itself predominantly adopts four distinct conformations. 38 To elucidate the conformational dynamics of BCETB, we performed principal component analyses (PCA) on the simulated conformations of the free BCETB and the complexes with one and two potassium ions obtained with the force field using DFT charges. Resulting from the PCAs performed on the simulated conformations of BCETB in the system with potassium chloride, four well-defined conformations separated by notable energy barriers are distinguished. These conformations are characterized by having the CEs either bent out from the cavity or in toward the cavity (Figure S11). One of the conformations is more extended with both CEs bent out (EXT), whereas another appears collapsed with both CEs bent in (COL). The two other conformations appear as skewed, with one CE bent out and one bent in (SK1 and SK2). Due to the C2 symmetry of BCETB, the SK1 and SK2 conformations are identical (and hence the same conformation) for the free BCETB and the complex with two potassium ions bound. However, for the complex with one potassium ion bound, the C2 symmetry is broken. In this case, SK1 is defined as the conformation where the potassium ion is bound to the CE that is bent out, and SK2 is defined as the conformation where the potassium ion is bound to the CE that is bent in toward the cavity. The probabilities for each conformation (pEXT, pCOL, pSK1, and pSK2) were obtained by integrating the sampled points in the PCA space within each of the drawn ellipses enclosing the minima (Figure S10). These probabilities are illustrated in Figure 4, showing that with no potassium ions bound, BCETB adopts the EXT conformation with the highest probability, whereas the COL conformation exhibits the lowest probability among the four conformations. The conformations with one CE bent in and the other bent out (SK1 and SK2) exhibit intermediate and overlapping probabilities. Due to the C2 symmetry of BCETB, this is expected in the case of the free BCETB and the complex with two potassium ions bound. Surprisingly, the probabilities of the SK1 and SK2 conformations are found to be equal also for the complex with one potassium ion bound. Any difference in the solvation free energy of these conformations would favor one of them. On the other hand, the binding site separation is equal for these conformations, which implies that the interaction between the binding sites is not altered much when the complex switches from the SK1 to the SK2 conformation. The probability of the COL conformation increases successively upon binding of the first and second potassium ion.
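Before interpreting these probabilities further, a hedged sketch of the bookkeeping just described: project aligned trajectory frames onto the first two principal components and count the fraction of frames inside an ellipse drawn around each free-energy minimum. The frame array and all ellipse parameters below are placeholders, not the paper's values.

```python
import numpy as np
from sklearn.decomposition import PCA

def conformer_probabilities(frames, ellipses):
    """frames: (n_frames, 3N) array of aligned heavy-atom coordinates.
    ellipses: {name: (cx, cy, ax, ay)} centres and semi-axes in PC space.
    Returns the fraction of frames falling inside each ellipse."""
    pcs = PCA(n_components=2).fit_transform(frames)
    probs = {}
    for name, (cx, cy, ax, ay) in ellipses.items():
        inside = ((pcs[:, 0] - cx) / ax) ** 2 + ((pcs[:, 1] - cy) / ay) ** 2 <= 1.0
        probs[name] = inside.mean()
    return probs

# Hypothetical ellipse centres/semi-axes, one per minimum (placeholders):
ellipses = {"EXT": (-1.0, 0.0, 0.4, 0.3), "COL": (1.0, 0.0, 0.4, 0.3),
            "SK1": (0.0, 0.8, 0.3, 0.3), "SK2": (0.0, -0.8, 0.3, 0.3)}

# Stand-in for real trajectory data (random coordinates, demo only):
frames = np.random.default_rng(0).normal(size=(5000, 3 * 60))
print(conformer_probabilities(frames, ellipses))
```

In practice the frames would come from the 100 ns analysis trajectories, with one PCA per bound state, and the ellipses would be drawn around the minima of the sampled density as in Figure S10.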
The stabilization of the conformation indicates that there is some favorable interaction introduced when the ions bind, either between water and BCETB in its collapsed state (i.e., increased solvation of this conformation upon binding) or between the binding sites. The relatively large magnitudes of the experimentally estimated and the simulated binding enthalpies (Table 3) indicate strong attractive interactions between the potassium ion and the CE moiety of BCETB. It is thus plausible that the increased probability of the collapsed state is due to additional, stabilizing interactions introduced between the bound potassium ion and the opposite, free CE when the binding sites come closer. The additional increase of the probability of the collapsed conformer when the second K+ binds indicates further stabilizing interactions between the two CE·K+ groups. This is in line with the prediction from continuum electrostatics, and the seemingly attractive CE·K+−CE·K+ interaction could be responsible for an additional contribution to the large negative ΔΔHbind° observed in the ITC experiments. Even though the predictions of ΔΔHbind° from simulation show large overlapping errors (Table 3) and should be evaluated with care, the force field using DFT charges predicts a more negative ΔΔHbind° compared to the force field using OPLS-AA charges, and a plausible explanation could be the more favorable CE·K+−CE·K+ interactions depicted by the former. The force field using DFT charges entails (on average) more negative charges on the oxygens in the CEs, which in turn can explain the more favorable CE·K+−CE·K+ interactions. Internal Energies of the Complexes and the Conformations. Any enthalpy change observed in experiments is related to the internal energy change of the studied system, ΔUsys, through ΔHsys = ΔUsys + Δ(PV), where P and V are the pressure and volume of the system, respectively. From the PCA analysis, it is apparent that the binding of potassium ions to BCETB favors the COL conformation, whereas the probability of the EXT conformation decreases upon binding of the first potassium ion (Figure 4). To isolate the contribution from internal energies within the BCETB complexes (U0, U1, and U2 for the free BCETB and the complexes with one and two potassium ions bound, respectively) to ΔΔH°, we have analyzed the changes in internal potential energies of the complexes upon coordinating a first and a second potassium ion. The internal potential energies are calculated as average energies of the simulated configurations generated in the REMD simulations, excluding water and counterions. These energies thus include the bonded and nonbonded interactions within BCETB (including the interaction between the binding sites) and the nonbonded interactions between BCETB and the bound potassium ions. The bonded interactions include harmonic bonds, angular potentials, and dihedral potentials in BCETB, whereas the nonbonded potentials include Coulomb electrostatics and Lennard-Jones interactions, as defined in the simulations. In Figure 5, the internal potential energies (U0, U1, and U2) of the complexes with no, one, and two potassium ions bound using DFT charges are presented. The top plot (A) shows that both the energy of adding a first potassium ion (ΔU0−1 = U1 − U0) and the energy of adding a second potassium ion (ΔU1−2 = U2 − U1) are negative with similar magnitude.
The relatively large magnitude (∼100 kcal/mol) is a result of the strong, attractive CE−K+ interactions formed when introducing a potassium ion into the binding site. In the bottom plot (Figure 5B), ΔU0−1 and ΔU1−2 are plotted together, showing the difference in the change in internal energy upon fixation of a second potassium ion compared to that of the first (ΔΔU = ΔU1−2 − ΔU0−1). This can be interpreted as a measure of the contribution to the potential energy solely from the interaction between the sites, CE·K+−CE·K+, and the negative ΔΔU (Table 4) indicates that favorable interactions between the sites contribute to the overall negative ΔΔH° that is observed in both experiment and simulations (Table 3). While the same result is predicted by continuum electrostatics (H++ < 0), where the enthalpic stabilization is a result of the dielectric properties of water (1 + ∂ ln ϵr/∂ ln T < 0) and thus is a solvent effect, we here exclude the water and thus calculate the interaction energy within the complex in vacuo. The enthalpy of interaction between two positive charges in vacuo is positive, and thus the negative sign of ΔΔU must be a result of attractive interactions within the complex, overcompensating for the electrostatic ion−ion repulsion. To understand the origin of the negative ΔΔU, we decomposed it into contributions from the four different conformations (SK2, EXT, SK1, and COL). Figure 6 shows the product pjUj for the different conformations with no, one, and two bound K+ obtained using DFT charges, where pj is the probability of conformation j obtained from the PCA analysis and Uj is the average internal potential energy of conformation j. In this way, the internal potential energy of each conformation and bound state is weighted with its corresponding probability in order to quantify its contribution to the internal potential energy of that bound state: Ui = Σj pi,jUi,j, where i denotes the bound state (free BCETB, one K+ bound, or two K+ bound) and j denotes the conformation (SK2, EXT, SK1, COL, or the rest). In the same way, the contributions to ΔU0−1, ΔU1−2, and ΔΔU were calculated by expressing them as the sums Σj Δ(pjUj)0−1, Σj Δ(pjUj)1−2, and Σj ΔΔ(pjUj), respectively. Figure 6 shows that the product pjUj decreases upon binding of both a first and a second potassium ion for all conformations j. However, the magnitude of the decrease differs depending on the conformation and bound state. The largest differences are observed for the EXT and COL conformations, where the former shows Δ(pEXTUEXT)1−2 > Δ(pEXTUEXT)0−1, whereas the latter shows the opposite. The differences observed between the different conformations were quantified by calculating ΔΔ(pjUj) = Δ(pjUj)1−2 − Δ(pjUj)0−1. These are depicted as arrows in Figure 7A, and the running sum, Σj ΔΔ(pjUj), is the sum of arrows as a function of the conformations accounted for. All conformations except the COL conformation yield positive contributions to ΔΔU (indicated by green arrows), and the EXT conformation shows a particularly large contribution (ΔΔ(pEXTUEXT) = 14.1 kcal/mol). On the contrary, the COL conformation is responsible for a large negative contribution (ΔΔ(pCOLUCOL) = −15.1 kcal/mol), which overcompensates for all of the positive contributions from the other conformations and makes ΔΔU negative (Figure 7B). Figure 6. Measures of the probability-weighted potential energy, pU, for each conformation with zero, one, and two potassium ions bound using DFT charges for the system with potassium chloride. The differences in pU upon binding of a first and a second K+ are plotted as arrows. The annotations within parentheses included in each plot indicate the conformation.
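The decomposition Ui = Σj pi,jUi,j and the resulting ΔΔU are plain bookkeeping once the per-conformer probabilities and mean energies are available. A minimal sketch with placeholder numbers follows (the values below are invented for illustration and are not the paper's; the real inputs would come from the PCA probabilities and the REMD energy averages).

```python
import numpy as np

conformers = ["SK2", "EXT", "SK1", "COL", "rest"]

# Rows: bound state (0, 1, 2 K+); columns: conformer. Placeholder values.
p = np.array([[0.20, 0.45, 0.20, 0.05, 0.10],    # free BCETB
              [0.22, 0.35, 0.22, 0.12, 0.09],    # one K+ bound
              [0.20, 0.25, 0.20, 0.25, 0.10]])   # two K+ bound
U = np.array([[ -50.,  -40.,  -50.,  -45.,  -42.],   # mean internal
              [-150., -140., -150., -160., -145.],   # potential energies,
              [-250., -240., -250., -275., -245.]])  # kcal/mol (placeholders)

pU = p * U                              # probability-weighted energies p_j*U_j
dU_01 = pU[1].sum() - pU[0].sum()       # dU(0->1)
dU_12 = pU[2].sum() - pU[1].sum()       # dU(1->2)
per_conf = (pU[2] - pU[1]) - (pU[1] - pU[0])   # ddU(p_j U_j) per conformer
for name, c in zip(conformers, per_conf):
    print(f"{name:>4}: {c:+6.1f} kcal/mol")
print(f"ddU = {dU_12 - dU_01:+.1f} kcal/mol")
```

Even with these toy numbers the qualitative pattern of Figure 7 appears: the increasingly populated COL state contributes a large negative term that can outweigh the positive EXT contribution and leave ΔΔU negative.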
Solvent Response. As previously discussed, continuum electrostatics predicts that the presence of solvent results in an enthalpic attraction between two cations in water. To gain further insight into the effect of solvent reorganization upon binding, we have analyzed the solvation shell correlations between the two binding sites. Figure 8 shows the distributions of angles between the dipole moments of water molecules in the solvation shells around each binding site, calculated using the force field with DFT charges. The clusters are defined as the four closest water molecules around each binding site. Some changes can be identified; upon binding of the first potassium ion, the distribution is slightly shifted toward smaller angles between the dipoles of the water clusters. However, these changes are subtle, and the average angle between the dipoles varies only little (96°, 89°, and 90° for the states with no K+, one K+, and two K+ bound, respectively). Thus, alignment of the solvation water around the two sites upon binding of the first potassium ion is likely a minor contribution to the enthalpically more favorable binding of the second potassium ion. We performed the same analysis on larger clusters (8 and 12 water molecules), again with subtle correlations between the two solvation shells (see Figure S12 in the Supporting Information). This analysis is congruent with the fact that the negative H++(r) predicted by continuum electrostatics is small. The small contribution is expected since the average distance between the sites in BCETB, ⟨r⟩ = 11.5 Å, is significantly larger than the Bjerrum length, λB, which is the distance at which the electrostatic energy between two charges is comparable to the thermal energy, kBT, where kB is the Boltzmann constant (λB = e²/(4πε0εrkBT) ≈ 7 Å for water at 25°C). Figure 8. Probability density of angles between the dipoles of two water clusters, one at each binding site, using DFT charges for the system with potassium chloride. Each cluster is defined as the four closest water molecules to the binding site. The range of angles on the x-axis has been binned into subranges, where each marker represents the probability density within angles ±15° from the x-position of the marker (e.g., the leftmost markers show the probability densities of angles between 0° and 30°). The lines connecting the markers are merely guides, and the shaded areas show the interpolated errors between the probability densities for each subrange.
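As a quick numerical check of the Bjerrum-length argument above, a minimal sketch using standard constants and ϵr = 78.4 for water at 25°C:

```python
import math

e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
kB = 1.380649e-23        # Boltzmann constant, J/K

def bjerrum_length(eps_r, T):
    """Distance at which the Coulomb energy of two unit charges equals kT."""
    return e**2 / (4 * math.pi * eps0 * eps_r * kB * T)

lb = bjerrum_length(78.4, 298.15)
print(f"lambda_B = {lb * 1e10:.1f} A")  # ~7 A for water at 25 C
# The binding site separation in BCETB, <r> = 11.5 A, is well beyond
# lambda_B, so only weak direct electrostatic coupling is expected.
```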
Affinity of Anions to BCETB. Experimentally, the system with potassium thiocyanate yielded a significantly more negative ΔΔHbind° value compared to the systems with the other potassium salts (Table 3). This indicates that there are specific counterion effects involved in the binding of KSCN to BCETB, where the thiocyanate anion enthalpically favors the binding of a second potassium ion compared to when chloride, iodide, or sulfate is the counterion. In order to further investigate this observation, the distributions of the counterions around the BCETB surface were analyzed. Figure 9 shows local/bulk partition coefficients of the different counterions with respect to the BCETB surface, defined as 39

Kp(d) = [⟨nsurf,ion(d)⟩/⟨nsurf,water(d)⟩]/(Nion/Nwater)

Here, d is the distance from the BCETB surface, ⟨nsurf,ion(d)⟩ and ⟨nsurf,water(d)⟩ are the average numbers of counterions and water molecules, respectively, populating the region within [0, d] from the BCETB surface, whereas Nion and Nwater are the total numbers of counterions and water molecules, respectively, in the simulation box. For thiocyanate, we calculated Kp(d) for the individual atoms separately to gain insight into the relative affinities of the sulfur, nitrogen, and carbon atoms for BCETB. For water, the oxygen atom was chosen as the reference atom. Figure 9 shows that while all counterions are enriched around BCETB compared to water, the atoms in thiocyanate accumulate more than chloride and iodide. The average distance from the atoms in thiocyanate to BCETB follows the order S < C < N, implying a preferential orientation in which the sulfur atom points toward BCETB. This is in agreement with previously observed higher affinities of the thiocyanate ion for apolar surfaces compared to more strongly hydrated anions. 31,32 As previously mentioned, the enrichment of thiocyanate ions around BCETB compared to the other counterions could contribute to the stabilization of the complex with one or two potassium ions bound.
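A hedged sketch of the partition coefficient Kp(d) as defined above, assuming per-frame arrays of minimum ion−surface and water−surface distances have already been extracted from the trajectory (the distance extraction itself, and the real trajectories, are not shown; the demo data are synthetic):

```python
import numpy as np

def partition_coefficient(d_ion, d_water, N_ion, N_water, d):
    """Local/bulk partition coefficient K_p(d): the local ion/water number
    ratio within distance d of the receptor surface, divided by the bulk
    ratio N_ion/N_water. d_ion and d_water are lists of per-frame arrays of
    minimum distances from each ion / water oxygen to the BCETB surface."""
    n_ion = np.mean([(frame <= d).sum() for frame in d_ion])      # <n_surf,ion(d)>
    n_water = np.mean([(frame <= d).sum() for frame in d_water])  # <n_surf,water(d)>
    return (n_ion / n_water) / (N_ion / N_water)

# Synthetic demo: 200 frames, 6 anions and 3000 waters in the box (nm units)
rng = np.random.default_rng(1)
d_ion = [rng.uniform(0.0, 2.0, 6) for _ in range(200)]
d_water = [rng.uniform(0.0, 2.0, 3000) for _ in range(200)]
print(partition_coefficient(d_ion, d_water, N_ion=6, N_water=3000, d=0.5))
# K_p(d) > 1 at small d signals surface enrichment, as seen for thiocyanate.
```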
■ CONCLUSIONS We have discovered counterintuitive thermodynamics governing the binding of potassium ions to a ditopic bis(18-crown-6) receptor. By means of ITC, we have found that the binding of a second potassium ion is enthalpically favored over that of the first, ΔΔHbind° < 0, despite the electrostatic repulsion one might expect to be introduced upon binding of the second ion. The experimentally observed enthalpic stabilization is supported by continuum electrostatic theory involving the temperature derivative of the dielectric constant. The experimentally estimated positive ΔΔGbind° results from a much larger entropic penalty associated with the binding of a second K+ compared to that of a first, which compensates for the experimentally estimated negative ΔΔHbind°. Comparison of the experimentally estimated binding constants (K1 and K2) also revealed negative cooperativity between the two binding sites in BCETB (ρ < 1). The observed thermodynamic trends (ΔΔHbind° < 0 and ΔΔGbind° > 0) were found to be independent of the counterions investigated. However, thiocyanate further enhances the negative value of ΔΔHbind° compared to chloride, iodide, and sulfate ions. A possible explanation for this is the relatively higher affinity of the weakly hydrated thiocyanate ion for apolar surfaces compared to more hydrated anions. 31,32 This was further supported by the analysis of the distribution of counterions around BCETB with one or two potassium ions bound, where we found a significant enrichment of thiocyanate close to BCETB compared to the other counterions. Through the analysis of the REMD simulation trajectories using DFT-derived partial charges, we have provided further insight into the molecular mechanisms underlying the experimental observations. In particular, we have analyzed the conformational space of the free BCETB and its complexes with one and two potassium ions bound from simulations with potassium chloride, revealing four distinct conformations. The probability of the most compact conformation, with both CEs bending inward (denoted COL), was found to increase successively upon binding of the first and second potassium ion. By analyzing the internal potential energies within the complexes with no, one, and two potassium ions bound (ΔΔU), we found that these energies alone are responsible for a negative contribution to ΔΔHbind°. By further weighting the individual internal potential energies of the different conformations with their respective probabilities, we found that the negative ΔΔU results primarily from the increasingly populated COL conformation upon binding, overcompensating for the positive contributions to ΔΔU from the rest of the conformations. While previous studies have found the apparent attraction between metal cations bound to neutral receptors to be primarily a result of the decreased solvation free energy of the whole complex, 17−19 we found only a subtle solvent response upon binding of the potassium ions to BCETB. Instead, we found that the specific interactions within the complex result in increased attraction between the binding sites upon binding of potassium ions, as manifested by the promotion of the COL conformation. The increased probability of the COL conformation upon binding was further found to result in a significant contribution to the negative ΔΔHbind°, as shown by calculating the probability-weighted internal potential energies of the complexes. We anticipate that to further elucidate the origin of the pronounced enthalpic stabilization observed for the complex with two potassium ions bound, the role of the counterion in the binding process needs to be studied further. While we have herein limited ourselves to reporting the relative affinities of the counterions to BCETB, further details about the interactions causing thiocyanate to accumulate around BCETB could reveal counterion-specific contributions to the stabilities of the principal conformations of the potassium−BCETB complexes found here. In summary, in this study we have aimed to elucidate the different contributions to the overall interaction between potassium ions bound to a ditopic receptor. By studying the binding process from three different perspectives (experiment, continuum theory, and atomistic simulation), we have investigated the influence of different factors such as solvation, conformational changes, and the counterion. The discovered perturbation of the conformational ensemble of BCETB upon binding of one or two potassium ions provides insight into the role of the receptor in host−guest chemistry, and we anticipate that this work can be of importance for the design of synthetic receptors. From continuum electrostatics, the negative enthalpic contribution to the interaction between bound cations resulting from the dielectric temperature response of water is a fundamentally important result that can hopefully contribute to our understanding of electrostatic interactions between charged molecules in solution. ■ MATERIALS AND METHODS Materials. BCETB was synthesized following a previously reported procedure. 15 Potassium chloride (99.0−100.5%) and potassium thiocyanate (99.0%) were purchased from Sigma-Aldrich. Potassium iodide (99.0%) was purchased from Acros Organics. Potassium sulfate (99.0%) was purchased from Merck. The salts were dried in an oven overnight prior to weighing. All solutions were prepared in volumetric flasks using deionized water. Isothermal Titration Calorimetry.
The isothermal titration calorimetry experiments were performed using a MicroCal VP-ITC instrument having a cell volume of 1.4631 mL. Prior to each titration, the solutions of titrand and analyte were degassed for 5 min at 20°C using a Thermovac instrument. The ITC experiments were performed at 25°C, with 307 rpm stirring and the reference power set to 25 μcal/s. The titrations were performed by injecting 10 μL portions of the titrant ([K+] = 147−247 mM) into a 0.39−0.40 mM solution of BCETB, with a 300 s delay between each injection. An initial injection of 4 μL was discarded from each data set in order to remove the effect of the titrant diffusing across the syringe tip during the prerun equilibration process. Heats of dilution determined in the absence of receptor were subtracted from the titration data prior to curve fitting. Each titration experiment was performed in triplicate. The raw data were analyzed using Python 3, using both a one-site and a sequential binding sites curve-fitting model. Further details regarding the ITC experiments are given in the Supporting Information.
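The paper states that the raw heats were analyzed in Python 3 with one-site and sequential models but does not show the fitting code. The following is a minimal sketch of the sequential two-site heat model in the MicroCal style, neglecting the displaced-volume correction; all parameter values would come from the fit, and the function names are the author's own, not from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def cumulative_heat(Xt, Mt, K1, K2, dH1, dH2, V0=1.4631e-3):
    """Cell heat content (kcal) for the sequential two-site model.
    Xt: array of total titrant concentrations after each injection (M);
    Mt: total receptor concentration (M); K1, K2: stepwise binding
    constants (1/M); dH1, dH2: stepwise enthalpies (kcal/mol);
    V0: cell volume (L), here the VP-ITC cell volume quoted above."""
    Q = np.empty(len(Xt))
    for i, xt in enumerate(Xt):
        # Mass balance for free titrant x: xt = x + Mt*(F1 + 2*F2)
        def balance(x):
            a1, a2 = K1 * x, K1 * K2 * x * x
            return x + Mt * (a1 + 2 * a2) / (1 + a1 + a2) - xt
        x = brentq(balance, 0.0, xt)          # free K+ concentration
        a1, a2 = K1 * x, K1 * K2 * x * x
        F1, F2 = a1 / (1 + a1 + a2), a2 / (1 + a1 + a2)
        Q[i] = Mt * V0 * (F1 * dH1 + F2 * (dH1 + dH2))
    return Q

# Per-injection heats are differences of successive cumulative heats; these
# would be fitted to the integrated, dilution-corrected data (e.g. with
# scipy.optimize.curve_fit) to recover K1, K2, dH1, and dH2.
```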
Density Functional Theory. Partial charges on the atoms in BCETB were obtained using DFT 36 calculations in Gaussian. 40 To generate the input, BCETB was energy-minimized in Avogadro, followed by minor adjustments of the atom positions to make the molecule C2-symmetric. The symmetric structure was used as input to Gaussian, followed by geometry optimization. From the optimized structure, charges were obtained by fitting to the electrostatic potential using the Merz−Singh−Kollman scheme, 41,42 employing the B3LYP functional 43,44 and the 6-31+G basis set. Partial charges were calculated for both the free BCETB and the complexes with one and two potassium ions bound in order to take into account polarization upon binding. Replica Exchange Molecular Dynamics. Replica exchange molecular dynamics simulations were performed using GROMACS 2019.4. 45 Energies of the initial configurations were minimized using the steepest descent algorithm. From the minimized configurations, the system was equilibrated in two steps, and from the equilibrated configuration, production runs were performed. For both equilibration and production runs, a leapfrog stochastic dynamics integrator 46 was used, implicitly handling the temperature coupling, and the time step was set to 0.002 ps. In the first equilibration step, the system was run in the NVT ensemble for 20 ps, with an inverse friction constant of 1.0 ps−1 and a heat bath temperature of 298.15 K. In the second step, the system was equilibrated in the NPT ensemble for 1 ns using the Berendsen barostat 47 with a relaxation time of 0.5 ps and a reference pressure of 1 bar. Production runs were performed in the NPT ensemble using the Parrinello−Rahman barostat, 48 with a relaxation time of 1.0 ps and an isothermal compressibility of 4.5 × 10⁻⁵ bar⁻¹. For the solvation process, 30 ns production runs were performed, whereas the simulation time was extended to 50 ns for the complexation processes in order to sample enough of the different regions of the conformational space of BCETB. In all simulations, a cubic simulation box was used with initial dimensions of 4.0 × 4.0 × 4.0 nm³ (prior to NPT equilibration). For potassium, chloride, and iodide, we used charges and Lennard-Jones (LJ) parameters from the OPLS-AA force field. 49−52 For thiocyanate, we used charges and LJ parameters from a recently developed force field. 32 For BCETB, we applied LJ parameters according to the OPLS-AA force field, whereas the charges were assigned using three different approaches. In the first approach, we assigned partial charges according to the OPLS-AA force field, where the partial charges were equally shifted on all atoms in order to make the compound electroneutral. In the second approach, we used partial charges obtained from DFT calculations on the free BCETB. In the third approach, three sets of partial charges obtained from DFT calculations were used during the creation of the first and second potassium ion in the binding sites: the charges on the free BCETB and the charges on BCETB with one and two potassium ions bound, respectively (Figure S7 and Table S2 in the Supporting Information). The latter approach allowed us to include the effect of polarization of the complexes upon binding. All simulations were run using the SPC/E 53 water model. Due to the large conformational space of BCETB, replica exchange was utilized in an attempt to achieve more efficient sampling. Simulations were performed at six different temperatures in the range 298.15−318.15 K. The temperature spacing was chosen to be 4 K, which has previously been suggested in order to achieve optimal exchange probabilities in the NPT ensemble. 54 The range of temperatures simulated also enabled the estimation of binding enthalpies. To obtain free energy differences, the GROMACS implementation of the Bennett acceptance ratio 55 was used. Soft-core interactions were applied to both electrostatics and Lennard-Jones interactions to prevent the system from having overlapping particles as it is decoupled, with a soft-core α of 0.5, a soft-core power of 1.0, and a soft-core σ of 0.3 nm. In addition to the fully decoupled and the fully coupled states, 31 intermediate states were simulated in order to make the histograms of acceptance probabilities overlap and reduce the errors to a satisfactory degree. For the analyses of the conformational space of BCETB, the internal potential energies within the complexes, the solvent response, and the counterion affinity, longer simulations of 100 ns were performed. ■ ASSOCIATED CONTENT Supporting Information The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/jacs.1c08507. ITC experiments with additional potassium salts, validation of the experimental method, continuum electrostatic theory, correction factors due to the standard state and binding to ditopic receptors, computational details, van't Hoff plots, PCA analysis, and solvent response (PDF)
The model of low impact development of a sponge airport: a case study of Beijing Daxing International Airport A sponge airport is a new concept of airport stormwater management, which can effectively relieve airport flooding and promote the usage of rainwater resources, often including the application of low impact development (LID) facilities. Although many airports in China have been chosen to implement sponge airport construction, there is a lack of quantitative evaluation of the effect of LID facilities. This paper takes Beijing Daxing International Airport as a case study and develops a comprehensive evaluation of the effect of LID facilities using the storm water management model (SWMM). The performance of four LID design scenarios with different locations and sizes of the rain barrel, the vegetative swale, the green roof, and the storage tank was analyzed. After LID, the water depth at J7 is reduced from 0.6 m to 0.2 m, and the duration of accumulated water is reduced from 5 hours to 2.5 hours. The water depth at J17 is reduced from 0.5 m to 0.1 m, and the duration of accumulated water is reduced from 2 hours to 15 minutes. The capacity of the conduits is greatly improved (Link 7 and Link 17). The application of LID facilities greatly improves rainwater removal capacity and effectively alleviates the waterlogging risk in the study area. doi: 10.2166/wcc.2020.195 Jing Peng (corresponding author) and Jiayi Ouyang, College of Airport, Civil Aviation University of China, No. 2898 Jin Bei Road, Dongli District, Tianjin, People's Republic of China; Lei Yu, Tianjin Lonwin Technology Development Co., Ltd., No. 15 Longtan Road, Hedong District, Tianjin, China INTRODUCTION In recent years, the concept of resilience has been introduced into the engineering field, in particular in relation to disaster mitigation and management (Cimellaro et al. ). The content of a sponge city is to advocate the construction of a 'rainwater system' with low impact, so that cities can absorb, save, store, filter, and purify rainwater when it rains, like a sponge. When there is demand, the city can reasonably release and make full use of the stored rainwater resources (Zhang et al. ). With the increase of airport scale and passenger flow, the contradiction between the supply of and demand for airport water resources has become increasingly obvious. The water consumption data of some airports have been studied by Carvalho et al. (), who reported that the average water consumption is about 20 L/passenger, with about 800 thousand m³ of water consumption per year. In addition to potable water, a large amount of water is used to meet non-potable water requirements (Carvalho et al. ). A variety of models have been established to calculate the cost of rainwater utilization (Fernandes Moreira Neto et al. ). The new airport has also carried out design planning for a sponge airport. Through the construction of a digital rainwater management system, the rainwater pipeline system of the new airport has been designed and checked, and the waterlogging risk caused by excessive rainfall evaluated (Ren et al. ; Xie et al. ). In summary, current rainwater utilization at airports mainly focuses on treatment methods and the utilization cost of rainwater. Few scholars have conducted research on the sponge airport, especially the impact of LID facilities on airport rainwater utilization using a model.
Traditional airport construction focuses on the fast-drainage mode, in which rainwater is removed by various underground pipes or open channels. The pressure on the water supply and drainage facilities increases when there is heavy rain, while the rainwater cannot be discharged quickly, increasing the frequency of waterlogging at the airport. Therefore, this paper first puts forward the construction concept of the sponge airport, and the LID facilities are designed and built. The airport can then respond to heavy rain disasters like a sponge city with good elasticity: when it rains, it absorbs, stores, and infiltrates water; when water is needed, it releases and uses the stored water. Then the SWMM software was applied to establish models, which can simulate rainstorm runoff and evaluate the effects of LID facilities. The water depth, the capacity of conduits, and the number of overflow junctions and full-flow conduits are compared before and after implementation of the LID facilities. The research can be applied both to sponge cities and airports. It will help to achieve the construction goal of 'safe airport, green airport, smart airport, humanities airport' and contribute to the construction of a green, environmentally friendly, and harmonious sponge city. Design of LID facilities The purpose of sponge airport construction is to design and construct a sponge airport by setting LID facilities which can store and discharge rainwater naturally, and to achieve reasonable infiltration, detention, storage, purification, reuse, and drainage of rainwater resources in the airport. This can make sponge airports, like sponges, have good 'elasticity' in adapting to environmental changes and responding to rainstorm disasters. When it rains, the airport will absorb, store, infiltrate, discharge, and purify water, and, if necessary, the stored water will be released and utilized. The construction of a sponge airport can reduce the airport waterlogging crisis, realize the recycling of airport rainwater resources, improve the ecological environment to the greatest extent, and realize the natural purification and infiltration of rainwater. The LID facilities for a sponge city mainly include bioretention, grassed swales, rain barrels, permeable pavement, sunken green space, and so on. The LID facilities for a sponge airport are designed according to the land use characteristics of the airport. In the runway, taxiway, and apron areas, the main method of rainwater collection and utilization is discharge and drainage. In the soil areas of the flight area, the terminal area, and the pavement, LID facilities that infiltrate, detain, and store rainwater can be adopted. In this paper, the performance of four LID facilities with different locations and sizes, namely the rain barrel, the vegetative swale, the green roof, and the storage tank, is analyzed. The applied location and role of the LID facilities can be seen in Table 1. Software Many researchers use models to predict floods, forecast river flow, and support stormwater management (Mosavi et al. ; Rabori & Ghazavi ). SWMM has good versatility, relatively low demand for research data, no time step limit, and no scale limit. Therefore, the study presented in this paper investigates how SWMM can be implemented to simulate flooding at the study airport. Study area Beijing is located in the north of China, with uneven rainfall distribution over the year. There are many sudden rainstorms in summer, which often lead to flood disasters. Thus, Beijing Daxing International Airport is taken as the case study.
Beijing Daxing International Airport is located in the Daxing District of Beijing, about 50 km from the center of Beijing. The location is shown in Figure 1. The airport is being constructed and operated in stages. The planned land area for this period (2020) is about 27 km². In the first phase of airport construction, the flight area, maintenance area, and freight area are large, and the hard-surfaced area of the airport accounts for 69%, which will inevitably change the original runoff characteristics of the site (Ge et al. ). According to the topography of Beijing Daxing International Airport, it can be divided into seven drainage zones, namely, N1, N2, N3, N4, N5, N6, and S1. Because the rainwater drainage of the airport is very complex and the rainwater drainage system of Catchment N1 is an independent system (Figure 2), N1 was taken as an example for simulation. Other catchments (such as N2, N3, and so on) can be further studied in the future. Catchment N1 includes the maintenance area, part of the flight area, and the west part, which has an area of 1.0 × 10⁵ m² and a capacity of 2.7 × 10⁵ m³. Storm water was conveyed to storage N1 through the nearest channels or pipes in the form of catchment surface runoff. The layout of rainwater junctions, conduits, and storage N1 in Catchment N1 is shown in Figure 2. Rainfall data Beijing has a temperate continental climate, which is hot and rainy in summer and prone to severe rainstorms. The aim of this paper is to verify whether the rainwater drainage system can cope with a severe rainstorm. On this basis, the model is simulated for a 1-hour rainfall scenario with a 100-year return period (Figure 3). The rainfall series used in this simulation is assumed to be an independent rainfall event, with no rainfall before it (if there were rainfall before it, the junctions and conduits might have water accumulation before the simulation; the simulation results would then be affected by the previous rainfall, and it would be impossible to evaluate the effect of this rainfall alone). Model parameter setting The parameters of SWMM include hydrologic parameters and hydraulic parameters. The hydraulic parameters include the diameter, length, shape, and elevation of the rainwater drainage pipes, which can be set according to the drainage engineering design drawings. The hydrologic parameters include the area of the sub-catchment, width, percent slope, percent impervious, Manning's n of impervious areas (N-Imperv), depth of depression storage of impervious areas (Dstore-Imperv), and so on. Some parameters can be set according to the land use type of the study area. Some parameters are purely empirical, or empirical parameters with certain physical significance. It has been found that the main sensitive parameters affecting the output of the SWMM model are N-Imperv, N-Perv, Dstore-Perv, and conduit roughness (Hu ). The slope of each sub-catchment is obtained by analyzing the slope of regional digital elevation model (DEM) data. Dstore-Imperv and Dstore-Perv are obtained by analyzing the subsurface properties of the airport. Manning's n is set by referring to values reported in the literature and the actual site conditions. After sensitivity analysis and simulation calculation, the appropriate parameter values are finally set. In this study, the rainwater drainage conduits are concrete channels, and Manning's n of concrete channels is set to 0.013. The impervious area includes the runway, taxiways, and aprons, which are mainly cement concrete pavement; N-Imperv is set to 0.012. The permeable area is mainly soil surface with grass; N-Perv is set to 0.2.
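The one-at-a-time sensitivity check mentioned above can be sketched as follows. This is only an organizational sketch: run_swmm() is a hypothetical stand-in for a full SWMM run (for example, returning peak runoff or flood volume for Catchment N1), and the Dstore-Perv value is the pre-LID figure given in the following paragraph.

```python
# Parameter values taken from the surrounding text; run_swmm() is hypothetical.
base = {"n_imperv": 0.012, "n_perv": 0.2, "dstore_perv_mm": 3.0, "conduit_n": 0.013}

def run_swmm(params):
    # Placeholder: in practice this would write a .inp file and execute SWMM,
    # then read back a response such as peak runoff at the outfall.
    return sum(params.values())  # dummy response, for illustration only

# Perturb each sensitive parameter by +/-20% and record the response change.
for name in base:
    for factor in (0.8, 1.2):
        trial = dict(base, **{name: base[name] * factor})
        change = run_swmm(trial) - run_swmm(base)
        print(f"{name:15s} x{factor}: response change = {change:+.4f}")
```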
Dstore-Perv is set to 3 mm in the model without LID facilities and is increased to 10 mm in the model with LID facilities. The infiltration parameters depend on which infiltration model is selected for the project: Horton, Green-Ampt, or curve number. Green-Ampt requires detailed soil data. The curve number method only reflects the underlying surface of the basin and does not reflect the rainfall process, so it is only suitable for large basins. Horton is often used in rainfall-runoff simulation of small urban watersheds. Therefore, Horton is adopted in this study. RESULTS AND DISCUSSION In this study, a traditional hydrological model (without LID facilities) was built first. The simulation results show that part of the rainwater pipe network is the key restriction on the rainwater drainage system, and there is a bottleneck area. Due to the full flow of Link 6 and Link 7, rainwater cannot be discharged quickly enough, resulting in water accumulation at J7 and J8. Due to the full flow of Link 14, Link 15, and Link 16, rainwater cannot be discharged quickly enough, resulting in water accumulation at J15, J16, and J17. These conduits are the cause of the bottlenecks in the rainwater drainage system in the study area. In order to avoid water accumulation, the size of these conduits could be expanded. However, it is difficult to modify the size of conduits in an already designed drainage system, so LID facilities are adopted in the bottleneck area and studied to see whether they can improve the drainage capacity of the study area. Simulation after applying LID facilities The LID facilities are arranged according to the junctions and conduits that appear overloaded in the simulation. There is a large maintenance area in sub-catchments S5-S8, which also form the bottleneck area. Because sub-catchments S15-S17 mainly contain runway, taxiway, and soil areas, the vegetative swale is installed there. In addition, in order to relieve the rainwater drainage pressure on Link 14, Link 15, and Link 16, it is necessary to set storage tanks along the way to store some rainwater. The storage tanks along the way are set up between J16 and J18, and the storage volume is 4.0 × 10⁴ m³. The situation of the applied LID facilities can be seen in Table 2. Based on the analysis of the simulation results of the model after setting the LID facilities, the water elevation profiles and water depths are compared. Figure 7(a) shows the water depth of J7 and J17 before the LID facilities are applied, and Figure 7(b) shows the water depth of J7 and J17 after the LID facilities are applied; the comparison is summarized in Table 3. The LID facilities greatly increase the reduction rate of surface runoff of rainwater, and also increase the reduction rate of the number of overflow junctions and full-flow conduits to a certain extent. The series of LID facilities adopted greatly improves the rainwater removal capacity and effectively alleviates the risk of waterlogging in the study area. CONCLUSIONS At present, the research on sponge airports mainly focuses on the design of LID facilities and the construction of sponge airports, and few researchers use software to simulate the implementation effect of LID facilities. In this paper, rainwater-runoff simulation models before and after implementation of LID facilities were developed using SWMM. The developed models were applied to the rainwater drainage system of a catchment of Beijing Daxing Airport. The sponge airport models were implemented to calculate the water depth, the number of full-flow conduits and overflow junctions, and the duration of accumulated water before and after applying the LID facilities.
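As noted above, the Horton model is adopted for infiltration. A minimal sketch of Horton's equation is given below; the parameter values are illustrative defaults only, since the paper does not report the initial and final infiltration capacities or the decay constant used for the airport soils.

```python
import math

def horton_infiltration(t_hr, f0=76.2, fc=3.81, k=4.14):
    """Horton infiltration capacity f(t) = fc + (f0 - fc) * exp(-k * t).

    f0 and fc are in mm/h and k is in 1/h; these defaults are illustrative,
    not the values calibrated in the study.
    """
    return fc + (f0 - fc) * math.exp(-k * t_hr)

# Example: infiltration capacity over the 1 h design storm, every 10 minutes.
for minutes in range(0, 61, 10):
    f = horton_infiltration(minutes / 60.0)
    print(f"t = {minutes:2d} min, infiltration capacity = {f:6.2f} mm/h")
```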
According to the results and discussion, the following key findings can be concluded: after applying the LID facilities, the water depth at J7 decreases from 0.6 m to 0.2 m and the duration of accumulated water decreases from 5 hours to 2.5 hours; the water depth at J17 decreases from 0.5 m to 0.1 m and the duration of accumulated water decreases from 2 hours to 15 minutes; and the capacity of the bottleneck conduits (Link 7 and Link 17) is greatly improved, so the LID facilities effectively alleviate the waterlogging risk in the study area. There are some limitations in the study. For future research, the sensitivity of more parameters should be studied in detail, and the model should be further calibrated with more measured data. In addition, the rainwater drainage system operation of other areas of Beijing Daxing Airport can be simulated, the optimal LID facilities and control strategy selected, and accurate rainwater-runoff simulation realized to improve flood control and management.
3,246.6
2020-02-06T00:00:00.000
[ "Environmental Science", "Engineering" ]
Research on Cloud-Edge-End Collaborative Computing Offloading Strategy in the Internet of Vehicles Based on the M-TSA Algorithm In the Internet of Vehicles scenario, the in-vehicle terminal cannot meet the requirements of computing tasks in terms of delay and energy consumption; the introduction of cloud computing and MEC is an effective way to solve this problem. However, the in-vehicle terminal has strict requirements on task processing delay, cloud computing incurs a high delay when uploading computing tasks to the cloud, and the MEC server has limited computing resources, so the task processing delay increases when there are more tasks. To solve the above problems, a vehicle computing network based on cloud-edge-end collaborative computing is proposed, in which cloud servers, edge servers, service vehicles, and the task vehicles themselves can provide computing services. A model of the cloud-edge-end collaborative computing system for the Internet of Vehicles is constructed, and a computational offloading strategy problem is formulated. Then, a computational offloading strategy based on the M-TSA algorithm and combined with task prioritization and computational offloading node prediction is proposed. Finally, comparative experiments are conducted on task instances simulating real road vehicle conditions to demonstrate the superiority of our network, where our offloading strategy significantly improves the utility of task offloading and reduces offloading delay and energy consumption. Introduction With the development of 5G technology and smart connected cars, cars have become equipped with stronger computing and storage capabilities, as well as information collection and communication capabilities, and many new in-vehicle applications have emerged. Although these applications can enhance the user experience and improve driving safety, such as AI-based applications, virtual reality, intelligent assisted driving, image navigation, and entertainment applications, they all have high requirements for computing and storage resources and are sensitive to latency. The computational demand of the Internet of Vehicles has thus boomed [1][2][3], and the limited computational and storage resources of in-vehicle terminals cannot meet the resource demand of computational tasks with high complexity, data density, and delay sensitivity [4]. The introduction of cloud computing and Mobile Edge Computing (MEC) into the Internet of Vehicles is an effective way to solve the above problems, but cloud computing uploads computing tasks to the cloud with high delay, whereas in-vehicle terminals have strict requirements on task processing delay. Furthermore, the computing and storage resources of edge computing servers in MEC are limited, and more tasks will increase the task queuing delay at the server. Therefore, collaborative central cloud, edge cloud, and vehicle cloud computing can provide better computing services for task vehicles, where the vehicle cloud is the pool of resources formed by idle vehicle terminals [5]. Initial progress has been made in the research of collaborative computing for the Internet of Vehicles scenario, where the computational offloading decision is the core research point, and the key is to find the optimal offloading decision to improve computational efficiency and reduce computational cost.
Unfortunately, the computational offloading problem of collaborative computing in the Internet of Vehicles scenario is a mixed integer nonlinear programming (MINP) problem, which is difficult to solve directly using traditional mathematical methods. Although many scholars have studied computational offloading strategies for the Internet of Vehicles scenario, there is no popular general solution method yet. Based on the above, we study the offloading strategy of collaborative computation at the cloud-edge-end of the connected vehicle scenario with real road conditions and vehicle motion, fully adopt an intelligent swarm optimization algorithm to solve the problem, and comprehensively optimize the computational delay and energy consumption. The main contributions of this work can be summarized as follows: (1) A three-layer architecture of the central cloud, edge cloud, and vehicle cloud is proposed as a cloud-edge-end collaborative computing system model, together with the task offloading strategy problem in an Internet of Vehicles scenario. (2) A Multi-strategy collaboration Tunicate Swarm Optimization Algorithm (M-TSA), which introduces a memory learning strategy, a Levy flight strategy, and an adaptive dynamic weighting strategy on the basis of the standard TSA algorithm and has stronger global optimization-seeking capability, is proposed for multi-objective optimization of the offloading delay, energy consumption, and task offloading utility of cloud-edge-end collaborative computing systems. (3) To address the offloading strategy problem presented in (1), a computational offloading strategy based on the M-TSA algorithm and combined with task prioritization and computational offloading node prediction is proposed to significantly improve the system offloading utility and reduce the computational offloading delay and energy consumption of the system by taking into account the vehicle motion characteristics and task time delay sensitivity. Edge Computing Offloading Many experts and scholars currently have relatively mature research work on computational offloading strategies, mainly optimizing or jointly optimizing metrics such as time delay and energy consumption. There are mainly computational offloading frameworks based on mathematical models [6,7] and computational offloading schemes based on intelligent optimization algorithms such as genetic algorithms [8][9][10] and whale optimization [11]. Offloading strategies for collaborative computing have also been studied. Dai et al. [12] designed a probabilistic computational offloading algorithm for cloud-edge collaborative computing and verified its superiority in reducing task delay in a wide range of scenarios. Abbasi et al. [13] addressed the problem of allocating workloads in a fog cloud scenario, proposed a multi-objective model trading off task processing energy consumption and delay, solved it with the NSGA-II algorithm, and experimentally showed that both energy consumption and delay were significantly reduced. Huang et al. [14] studied an optimal offloading scheme considering energy minimization, which addresses the relationship between energy efficiency and performance in mobile edge computing systems. The results showed that this scheme is better than other offloading methods. Zhao et al. [15] explored the collaborative computation offloading problem in a MEC system with multiple users in a heterogeneous cloud system.
Based on dynamic planning, an energy consumption minimization algorithm with joint bandwidth and computational resource allocation was proposed, and the simulation results showed a reduction in energy consumption for mobile devices. Ramtin et al. [16] proposed an offloading scheme based on inter-device collaboration for jointly optimizing energy consumption and delay in edge computing, applying the maximum matching and minimum cost graph algorithms to derive a reasonable offloading scheme, and the results showed a reduction in energy consumption and delay. Fu et al. [17]-taking into account the effects of changing network conditions and wireless channel constraints-proposed an improved firefly swarm algorithm that optimizes computation offloading and resource allocation to reduce computation system latency and energy consumption. Li et al. [8] proposed a genetic-algorithm-based two-stage heuristic for joint computation offloading and resource allocation in multi-user and multi-server scenarios, and they proved the effectiveness of their algorithm for reducing terminal energy consumption. Su et al. [18] proposed a resource deployment and task scheduling algorithm based on task prediction and Pareto optimization. The user service quality and system service effect were significantly improved. Collaborative Computing Offloading under the Internet of Vehicles Preliminary research has also been conducted on collaborative computational offloading strategies for special scenarios of the Internet of Vehicles. Zhao et al. [7] proposed a joint optimization scheme for computational offloading and resource allocation based on a mathematical model to effectively improve the system utility and computation time of MEC in scenarios with insufficient computational resources; however, their algorithm is not general. Xu et al. [19] proposed an adaptive multi-objective evolution (ACOM) offloading method for IoCV scenarios with the introduction of 5G, which reduces the task offloading delay but does not consider the impact of vehicle mobility characteristics. Song et al. [20] constructed a unidirectional highway model under which edge servers and vehicle servers work together, described a safe switching interaction protocol while the vehicle is moving, and reduced offloading energy consumption and delay. Zhang et al. [21] constructed an SDN-assisted MEC network architecture for vehicle networks and proposed a joint task offloading and resource allocation strategy that can effectively reduce system overhead. Zhu et al. [22] designed a cloud-edge collaborative-based vehicular computing network architecture, proposed an offloading strategy scheme based on an improved multi-objective optimization immune algorithm, and verified the effectiveness of the algorithm. In the research of Shen et al. [23], a hybrid genetic algorithm (HHGA) task offloading strategy with a hill-climbing operator was proposed for mobile edge computing with on-street parking collaboration in the Internet of Vehicles to reduce the delay and energy consumption of computational tasks. Lastly, Su et al. [24] proposed an improved sparrow-algorithm-based computational offloading decision for cloud-edge collaborative computing to fully optimize task delay and energy consumption. 
In summary, research on edge computing offloading strategies is relatively mature, but research on collaborative computing offloading for special scenarios such as the Internet of Vehicles is lacking. Existing studies on computing offloading in Internet of Vehicles scenarios seldom consider vehicle movement characteristics, task priority offloading, real road traffic conditions, and other factors, and they often ignore idle vehicle terminal resources. To address the above problems, we will study the cloud-edge-end collaborative computational offloading strategy under real road traffic conditions and vehicle movement in the Internet of Vehicles scenario. Cloud-Edge-End Collaborative Computing System Model in the Internet of Vehicles Scenario The cloud-edge-end collaborative computing network in the Internet of Vehicles scenario described in this system consists of vehicles, base stations (BS), Edge Computing Servers (ECS), and Cloud Servers (CS). As shown in Figure 1, in a two-way straight-road scenario, many base stations equipped with edge servers are evenly deployed on the roadside, and their communication coverage radius is L. The vehicles and the ECS within the communication area of a BS are called an edge computing domain. There are two types of vehicles in an edge computing domain: one is task vehicles (TaV) that generate computational tasks; the other is service vehicles (SeV) that have many available computational resources and can provide computational services to the outside world. The set of edge servers is denoted as Es. To efficiently utilize the spectrum, this system considers an OFDMA-based wireless network that connects the ECS with the task vehicle TaV and the service vehicle SeV to form a star topology, where each vehicle can communicate with the ECS in one hop; wired connections are used between adjacent edge servers and between the ECS and the CS. In this computing network, the ECS is the manager of the computing domain and is responsible for the scheduling and allocation of all tasks. At the beginning of each time slot, each vehicle in the computing domain uploads task information and computing resource information to the edge server. Vehicles with many available computing resources are what we call SeV. The ECS aggregates the computing tasks and the resources of the service vehicles and, through the intelligent scheduling of tasks, can provide higher-quality computing services to the task vehicles at the end of the network. The parameters used in this paper are listed in Table 1. There are mainly vertical and horizontal collaborative computing methods for the vehicles described in this system model, and there are various servers that can provide computing offload services for the task vehicles in this model, namely, cloud servers, edge servers, terminal devices of the service vehicles, and terminal devices of the task vehicles themselves. Through the intelligent scheduling of tasks, the effective utilization of global resources can be realized, and the task vehicles at the end of the network can be provided with a higher-quality computing offload service. Vertical and horizontal collaboration are differentiated as follows: (1) Vertical collaboration: Comprised of the vehicle cloud, edge cloud, and central cloud, the three-layer Internet of Vehicles edge computing architecture provides multiple offload mode options for resource-constrained task vehicles.
Thus, task vehicles can choose to process their tasks locally according to the actual situation or offload tasks to neighboring service vehicles, edge servers, and cloud servers to achieve task processing. (2) Horizontal collaboration: The distribution of resources of the edge servers in the time dimension often shows variability. Lightly loaded edge servers may cause waste due to unutilized resources, while overloaded servers may affect the normal processing of tasks due to insufficient resources. Therefore, cross-domain edge collaborative computing can be used to improve the efficiency of system resource utilization and enhance the offloading utility of tasks. The task parameters are as follows: b i is the amount of data required to complete the task; c i is the amount of computation to complete the task; t max i is the maximum delay limit of the computation task. Vehicle Motion Model The system uses a two-dimensional coordinate system to model the motion process of the vehicle, as shown in Figure 2, denoting the BS side of the road as the x-axis and the vertical line through the BS as the y-axis, and assuming the coordinates of the BS are (0, h), where h is the straight-line distance between the BS and the road. The TaV i movement pattern can be represented by a binary group {(x i , y i ), v i }, where (x i , y i ) is the starting position of TaV i and v i is its travel speed. Assuming that the right is the positive direction, a positive v i indicates that TaV i travels rightward, and a negative v i indicates that TaV i travels to the left. Similarly, the SeV j movement pattern is represented by the binary group {(x j , y j ), v j }. The standard lane width of the road is 3 m. This system assumes that all vehicles travel in the middle of the lane, i.e., the vehicle vertical coordinate is y ∈ {−1.5, −4.5}. This system establishes a vehicle movement model constrained by speed and distance to simulate the real road vehicle driving environment. Since the calculated offloading time ∆t of the vehicle is very small, it is assumed that the vehicle maintains a uniform speed during the time ∆t, that is, v i (t + ∆t) = v i (t), where v i (t) denotes the velocity of vehicle vh i at time t. There are two constraints in this model: (1) Speed constraint: Because there is a speed limit on the real road, the speed of each vehicle must be maintained in a range, i.e., v i ∈ [v min , v max ]. (2) Distance constraint: Two vehicles in the same lane, vh i at position x i and vh j at position x j , need to satisfy x i − x j ∈ [l min , l max ]. l min indicates the minimum distance between two vehicles driving continuously on the same lane, also known as the safety distance; if the distance between two cars is too small, it will increase the risk of traffic accidents. l max denotes the maximum distance between two vehicles driving continuously on the same lane; if the distance between the two vehicles is too large, it will be a waste of traffic resources. Therefore, the distance between two vehicles must be kept within a reasonable range. Computational Model The computational task for a single time slot of each task vehicle TaV i ∈ Ta is considered in this system, denoted as T i , which is the smallest task and cannot be divided into subtasks.
Each task vehicle TaV i generates a computational task T i , which is represented as T i = {b i , c i , t max i }, where b i is the amount of data required to complete the task, that is, the amount of input data that must be transmitted from the task vehicle's local device to the service computation node for the computational task to execute; c i (cycles) is the amount of computation to complete the task; and t max i is the maximum delay limit of the computation task, determined by the task type. Each task can be executed locally in the task vehicle or offloaded to a service vehicle SeV j , the ECS, or the CS. Each service compute node has independent storage resources B n and compute resources C n . The task vehicle saves energy and task processing time by offloading computing tasks to a service compute node; however, transmitting the task input data needed to complete the task adds extra time and energy consumption during computational offloading. This section defines the task offloading variables {a i,j , i ∈ Ta, j ∈ N}, where a i,j = 1 means that the task T i of task vehicle TaV i is offloaded to service compute node j; otherwise, a i,j = 0. Since each task can be executed locally or offloaded to at most one service compute node, a feasible offloading strategy must satisfy the corresponding constraints, which determine the computation location of the task T i generated by the task vehicle TaV i . For each task vehicle TaV i ∈ Ta, due to limited computing resources, some of the computation tasks need to be transferred to the SeV, the ECS, or the CS, which then performs the computation. Since the SeV computation and storage resources are limited, in this study the SeV considers single-task computation and does not create a task cache. In this system, a task queue model of the task buffers of the ECS and CS is established, and Q(t + 1) denotes the accumulated tasks at moment t + 1, that is, Q(t + 1) = Q(t) − Φ(t) + D(t), where Φ(t) is the size of the computational tasks that leave the task buffer of the ECS at time slot t, i.e., the tasks for which the ECS completes the computation, and D(t) is the size of the computational tasks that are offloaded to the task buffer of the ECS by the task vehicles at time slot t. Delay Model Once the task vehicle TaV i 's calculation of task T i is complete, the resulting delay time t i includes: i. the upload delay t up (s), the time to transmit the input on the uplink to the service node N; ii. the cache delay t q (s), the queuing time in the task buffer; iii. the computation delay t exe (s), the task computation processing time; and iv. the download delay, the time to transmit the output on the downlink from the service compute node N to the task vehicle TaV i . These are described in more detail as follows: (1) Upload delay: Given the transmission rate of the communication between the computing nodes and the amount of task input data b i , the upload delay t up of task T i is t up = b i /r, where r denotes the transmission rate of communication between the computing nodes, and b i is the amount of input data required for the execution of the computing task that must be transmitted from the local user device to the computing node. The upload delay of the task includes the vehicle-to-vehicle transmission delay, the transmission delay between edge servers, and the upload delay t E2C i (E2C) from the edge server to the central cloud server.
(2) Cache delay: When the computational task T i is offloaded to the ECS or CS at moment t, the task cache queuing time t q is determined by the tasks accumulated in the task buffer. (3) Computation delay: Let f > 0 (cycles/s) denote the CPU computing capacity of the computing node. The task computation delay is then t exe = c i /f. Since the output data volume is usually much smaller than the input, and the data transmission rate of the downlink is much higher than that of the uplink, the transmission delay of the output is omitted in this model, as also considered in [25][26][27]. There are four categories of computational processing described in this system, namely, local computation, service vehicle computation, edge server computation, and cloud server computation, where the binary variable y i,j = 1 denotes that task T i of task vehicle TaV i is offloaded across domains to a collaborating edge server, and y i,j = 0 denotes edge computing within the local edge computing domain. The total computation delay t i of task T i is then obtained from the above delay components, and the total time delay of this collaborative computing system follows from the delays of all tasks. Energy Consumption Model The main consideration in this system is the task vehicle's energy consumption, which is divided into local computation energy consumption and task offloading energy consumption. The energy consumption generated by the task vehicle TaV i while it performs task T i locally, that is, the local computation energy consumption e loc i , is e loc i = p c · t loc i , where p c indicates the power of the vehicle terminal's CPU. When the task vehicle TaV i offloads a task to a service computing node, the generated transmission energy e i is determined by p up , the transmit power of the vehicle terminal, and ξ i , the power amplifier efficiency of the task vehicle TaV i . In the general case, this system assumes that ξ i = 1, and the uplink energy consumption of the task vehicle TaV i is then calculated simply as e up i = p up · t up i [27]. When the task T i of the task vehicle TaV i is executed, its generated energy consumption consists of these components, and the total energy consumption of this collaborative computing system is obtained from the energy consumption of all tasks. Prioritization Model To improve the task offloading utility, a comprehensive evaluation of the computational task T i is performed based on the task delay constraint and the local computational urgency, and the priority of offloading the computational task T i is determined. The model uses a mixed weighting approach to prioritize the computational task T i , where the priority Pr i is defined as Pr i = λ 1 U i + λ 2 W i , where λ 1 , λ 2 ∈ [0, 1] satisfy λ 1 + λ 2 = 1, W i denotes the task value of task T i , U i denotes the urgency of task T i , and t loc i is the local execution time of the task. Computational Offloading Strategy Problem For the cloud-edge-end collaborative computing system model in the Internet of Vehicles scenario proposed in Section 3, this section elaborates the system task offloading strategy problem. In edge computing systems, the quality of service is mainly expressed in terms of the delay and energy consumption generated by completing the computational tasks. In the considered Internet of Vehicles scenario, this paper, considering both the delay and energy consumption improvements, defines the task offloading utility F i of task vehicle TaV i as a weighted combination of the two, where δ t i is the time delay weight, δ e i denotes the energy consumption weight, δ t i + δ e i = 1, δ t i , δ e i ∈ [0, 1], and i ∈ Ta.
For example, a task vehicle TaV i with a small battery capacity can increase δ e i and decrease δ t i , thus saving more energy at the cost of a longer task delay. The task offloading utility of the system is expressed as F = {F i | i ∈ Ta}. For a given offloading strategy X, the task offloading strategy problem of the present collaborative computing system is formulated as maximizing the offloading utility of the system, that is, max F, subject to constraints C1-C6, where ϕ i denotes that TaV i and the BS can remain connected and ϕ i,j denotes that TaV i and SeV j can remain connected; they are calculated from L, the lateral distance that TaV i can move while staying within the V2I communication range at a fixed transmission power, and R, the lateral distance that TaV i can move while staying within the V2V communication range at a fixed transmission power, where sign(·) is the sign function. When the two vehicles TaV i and SeV j have the same speed and their initial positions are within communication range, the two vehicles can keep communicating for a long time, so ϕ i,j is assigned an enormous value. In other cases, when (x i − x j ) · sign(v i − v j ) > 0, TaV i and SeV j are moving away from each other; when (x i − x j ) · sign(v i − v j ) < 0, TaV i and SeV j are moving closer to each other. The constraints in Equation (15) are explained as follows: constraints C1 and C2 imply that each task can be executed locally or offloaded to at most one service computing node; constraint C3 implies that each service vehicle can serve at most one task vehicle; constraint C4 specifies that each task must be completed within the specified maximum time delay limit; constraint C5 specifies that a task offloaded to a service vehicle must be completed within the time the two vehicles can maintain communication, and a task offloaded to the ECS must be completed within the time communication with the BS can be maintained; constraint C6 specifies that the straight-line distance d i,j between the task vehicle and the service vehicle must be no greater than the communication distance R for the task to be offloaded. Multilateral Collaborative Computing Offloading Strategy Based on the M-TSA Algorithm To cope with the more complex cloud-edge-end collaborative computing system in the Internet of Vehicles scenario, this section proposes a multi-strategy collaboration-based TSA algorithm (M-TSA) and then proposes a multilateral collaborative computing offloading strategy based on the M-TSA algorithm. The M-TSA algorithm, which introduces multiple population evolution strategies into the TSA algorithm, can better optimize the computational offloading quality-of-service metrics (system delay, system energy consumption) of the collaborative computing system in the Internet of Vehicles scenario. Standard TSA Algorithm The Tunicate Swarm Algorithm (TSA) is an intelligent swarm optimization algorithm proposed by Kaur et al. [28] that simulates the foraging behavior of a swarm of tunicates in the ocean. Its execution includes jet propulsion and swarm behavior. It has the advantages of a simple structure, strong local search ability, and high search accuracy, and it has been validated on function optimization problems and engineering applications. However, its search mode is single and there is no individual memory, so the local search is not sufficient and the accuracy is low when solving high-complexity problems.
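Before turning to the algorithm, the delay, energy, priority, and utility quantities defined in the preceding subsections can be illustrated concretely. The sketch below is one plausible reading of the model: the upload and computation delays (b i /r and c i /f) and the simplified uplink energy (p up · t up with ξ i = 1) follow the text, while the normalized-saving form of the utility, all function names, and the numeric values are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Task:
    b: float      # input data size
    c: float      # required computation (CPU cycles)
    t_max: float  # maximum tolerable delay (s)

def upload_delay(task, rate):            # t_up = b_i / r
    return task.b / rate

def compute_delay(task, f_cycles):       # t_exe = c_i / f
    return task.c / f_cycles

def local_energy(task, f_local, p_cpu):  # e_loc = p_c * t_loc (assumed form)
    return p_cpu * compute_delay(task, f_local)

def uplink_energy(task, rate, p_up):     # e_up = p_up * t_up (xi_i = 1)
    return p_up * upload_delay(task, rate)

def priority(urgency, value, lam1=0.5, lam2=0.5):
    # Pr_i = lam1 * U_i + lam2 * W_i with lam1 + lam2 = 1
    return lam1 * urgency + lam2 * value

def offload_utility(t_loc, e_loc, t_off, e_off, dt=0.8, de=0.2):
    # Assumed normalized-saving form; the paper's exact expression is not
    # reproduced in the extracted text.
    return dt * (t_loc - t_off) / t_loc + de * (e_loc - e_off) / e_loc

# Example: offload one task to an edge server and compare with local execution.
task = Task(b=2e6, c=5e8, t_max=0.5)
t_loc = compute_delay(task, f_cycles=1e9)
e_loc = local_energy(task, f_local=1e9, p_cpu=0.9)
t_off = upload_delay(task, rate=20e6) + compute_delay(task, f_cycles=10e9)
e_off = uplink_energy(task, rate=20e6, p_up=0.2)
print(f"utility of offloading: {offload_utility(t_loc, e_loc, t_off, e_off):.3f}")
```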
Jet Propulsion Equation (18) describes the principle of conflict avoidance between individuals, where A denotes the conflict avoidance factor between individuals, G is the gravity force, and c 1 , c 2 , c 3 are random numbers in [0, 1]. H represents the social interaction between individuals, and p min , p max are the initial and subordinate speeds of social interaction between individuals, set to p min = 1 and p max = 4. Equation (19) describes the movement toward the optimal individual, where PD denotes the distance between the food (the optimal individual) and the individual, k is the current iteration number, FS is the position of the food, and P p (k) denotes the current position of the individual. Equation (20) describes convergence to the optimal individual, where rand is a random number in [0, 1]. Swarm Behavior Equation (21) represents the updated position of the individual relative to the optimal solution, which is calculated based on the optimal positions of the current two generations of search individuals; the tunicate individuals perform swarm behavior to gather toward the food's (the optimal individual's) position. M-TSA Algorithm For the complex computational offloading problem of the cloud-edge-end collaborative computing system in the vehicle networking scenario, the M-TSA algorithm is proposed to improve the algorithm's global exploration and local exploitation capabilities by introducing a memory learning strategy, a Levy flight strategy, and an adaptive dynamic weighting strategy on the basis of the standard tunicate swarm algorithm, as described below. Memory Learning Strategy The memory learning strategy, introduced from the memory learning of Particle Swarm Optimization (PSO), updates the velocity v and position x according to Equation (22), where v i is the individual velocity, X i is the individual position, pbest i is the individual optimal solution, gbest is the global optimal solution, ω is the inertia factor, c 1 is the self-learning factor, and c 2 is the swarm learning factor. The memory learning of PSO is introduced into this algorithm to strengthen self-memory learning; c 1 = 2 and c 2 = 2, rand is a random number in [0, 1], k is the current population iteration number, K is the maximum iteration number, and in this algorithm ω max = 0.9 and ω min = 0.4. A dynamic inertia factor ω gives better optimization results than a fixed value, so this algorithm adopts a linearly decreasing weight strategy in which ω decreases gradually over the iterations; the swarm individuals then have a strong global search ability in the early stage and an enhanced local search ability in the later stage. Levy Flight Strategy To increase the diversity of the population, this algorithm introduces a stochastic cross-learning strategy based on Levy flight, which gives the algorithm greater randomness in the optimization process and prevents it from falling into local optima, as in Equation (23), where u ∼ N(0, σ u ), s ∼ N(0, σ s ), rand is a random number in [0, 1], and J ∈ [0, 1] denotes the probability variable that determines which cross-learning method is used by the individuals in the population. In order for individuals in the population to select each cross-learning mode with equal probability, J = 1/3. α denotes the cross-learning coefficient, and levy(·) denotes a random number that satisfies the Levy distribution.
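A compact sketch of the two update strategies just described is given below. The PSO-style velocity update follows the standard form implied by the definitions around Equation (22); the Mantegna method for drawing Levy-distributed steps, the linear inertia decay, and the step scale α are assumptions, since the full Equations (22) and (23) are not reproduced in the extracted text.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

def inertia(k, K, w_max=0.9, w_min=0.4):
    # Linearly decreasing inertia factor over the iterations.
    return w_max - (w_max - w_min) * k / K

def memory_learning_step(x, v, pbest, gbest, k, K, c1=2.0, c2=2.0):
    # PSO-style velocity/position update used as the memory learning strategy.
    w = inertia(k, K)
    v_new = (w * v
             + c1 * rng.random(x.shape) * (pbest - x)
             + c2 * rng.random(x.shape) * (gbest - x))
    return x + v_new, v_new

def levy_step(x, gbest, beta=1.5, alpha=0.01):
    # Levy-flight perturbation toward the global best. The Mantegna formula
    # below is an assumption; the paper only states u ~ N(0, sigma_u), s ~ N(0, sigma_s).
    sigma_u = ((gamma(1 + beta) * sin(pi * beta / 2))
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size=x.shape)
    s = rng.normal(0.0, 1.0, size=x.shape)
    step = u / np.abs(s) ** (1 / beta)
    return x + alpha * step * (gbest - x)

# Tiny demo on a 5-dimensional position vector.
x = rng.random(5); v = np.zeros(5); pbest = x.copy(); gbest = np.ones(5)
x, v = memory_learning_step(x, v, pbest, gbest, k=1, K=50)
print(levy_step(x, gbest))
```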
Adaptive Dynamic Weighting Strategy To improve the performance of the TSA algorithm, an adaptive dynamic weighting strategy is proposed to balance the global exploration and local exploitation capabilities of the TSA algorithm. For the position of each tunicate individual, the following equation is used to enhance the algorithm's ability to search for the global optimum and to increase the current search step so that the algorithm can escape extreme values: P p (k) = FS + 2·A·PD if rand ≥ 0.5, and P p (k) = FS − A·PD if rand < 0.5 (26), where A is the conflict avoidance factor between individuals, PD is the distance between the food (the optimal individual) and the individual, k is the current iteration number, FS is the position of the food, P p (k) denotes the position of the current individual, and rand is a random number in [0, 1]. To balance the global exploration and local exploitation abilities of the tunicate swarm algorithm, this paper proposes an adaptive dynamic weighting strategy to update the positions of tunicate individuals. In this strategy, the update formula for the position of a tunicate individual includes the current position of the individual, the position of the previous-generation individual, and an adaptive weight. The size of the adaptive weight is related to the position of the tunicate individual and can be dynamically adjusted during the iterations of the algorithm. When the adaptive weight is larger, the step size of the individual position update is smaller, which benefits the global exploration ability of the algorithm; when the adaptive weight is smaller, the step size of the individual position update is larger, which benefits the local exploitation ability of the algorithm. Compared with the random parameters in the original TSA algorithm, the adaptive dynamic weighting strategy can improve the performance of the algorithm and avoid the problems caused by the blindness of the algorithm. The adaptive weight is calculated from k, the number of current iterations, and K, the maximum number of iterations. The swarm behavior update of the M-TSA algorithm introduces the adaptive dynamic weight value through Equation (28), where P p (k) denotes the current individual's position and z denotes the adaptive dynamic weight value. During the iterations of the algorithm, the adaptive weight value decreases gradually with time, which leads to an overall increase in the position update weight and a corresponding increase in the update step size, giving the algorithm a strong exploration capability in the later stage. Adaptive Dynamic Regulation of Populations This algorithm adaptively and dynamically adjusts the number of individuals performing memory cross-learning and jet propulsion to enhance the population's global search ability. In the early iterations, most of the individuals in the population perform memory cross-learning to increase the population's local search, and the global search directionality is enhanced in the later iterations. In the later iterations, to avoid falling into locally optimal results, most of the individuals perform the TSA jet propulsion mode to improve the algorithm's ability to jump out of local regions for global search, effectively balancing the local search and global search abilities. The algorithm uses an adaptive decay adjustment strategy for the number of subgroup individuals num, as defined below.
where k is the number of current population iterations, K is the maximum number of iterations, and S is the overall number of individuals in the population. The variable gbest(k) is used to denote the global optimal individual at generation k. The steps of the population adaptive dynamic adjustment algorithm are shown in Algorithm 1. M-TSA Algorithm Steps The flow of the M-TSA algorithm is shown in Figure 3, and the specific steps are as follows. The M-TSA algorithm pseudocode is shown in Algorithm 2.
Algorithm 2 M-TSA algorithm.
Input: S, K, population X
Output: X best
procedure M-TSA
  p min ← 1, p max ← 3, X1 ← 0
  pbest, X best ← CalculateFitness(X)  /* initialize the individual fitness values using the CalculateFitness function */
  for k ← 1 to K do
    num ← Anum(S, k, K, X best)  /* Anum() is the adaptive dynamic regulation of populations algorithm; see Algorithm 1 */
    for i ← 1 to S do
      if i < num then
        rand ← Rand()  /* Rand() generates a random number in the range [0, 1] */
        if rand < Cr then
          X1 ← X + v  /* memory learning strategy according to Equation (22) */
        else
          X1 ← Levy_strategy()  /* Levy flight strategy according to Equation (23) */
        end if
      else
        /* jet propulsion according to Equations (18), (19), and (26) */
      end if
      X ← (X1 + X)/(2 + z)  /* adaptive swarm behavior according to Equation (28) */
    end for
    pbest, X best ← CalculateFitness(X)  /* calculate the individual fitness values of the new population */
  end for
  return X best
end procedure
procedure CalculateFitness(X)
  X best ← X[argmin(pbest_fit)]  /* argmin() obtains the index of the minimum value */
  return pbest, X best
end procedure
Step 1: First, according to the task instance, randomly generate the initial tunicate population; each individual in the population includes the position to be optimized x i (i = 1, 2, . . . , n) and its fitness value f i .
Step 2: Calculate the individual fitness values f i , and derive the initial per-individual optimal solution pbest i and the global optimal solution gbest.
Step 3: Based on the update situation of the global optimal individual gbest and the number of iterations, adaptively calculate the number of subpopulation individuals num, as described in Algorithm 1.
Step 4: The num individuals of the subpopulation perform the memory learning or Levy flight strategy: when the random number rand < Cr, where Cr is the cross-learning factor, they perform memory learning; the remaining S − num individuals perform the jet propulsion of TSA, where convergence to the optimal individual is calculated according to the improved Equation (26).
Step 5: Apply the adaptive dynamic weighting strategy to perform swarm behavior learning according to Equation (28), update the positions, and generate a new generation of the population.
Step 6: Check the new population for boundary violations, and process individuals beyond the constraint range.
Step 7: Calculate the individual fitness values of the new population, and update the individual optimal solutions pbest i and the global optimal solution gbest.
Step 8: Determine whether the maximum number of iterations has been reached; if so, output the global optimal solution gbest; otherwise, return to Step 2.
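The jet propulsion and adaptive swarm behavior steps referenced in Algorithm 2 can be sketched as follows. The gravity and social-interaction terms are simplified for illustration, and the linear decay of the adaptive weight z is an assumption, since the paper's formula for z is not reproduced in the extracted text; the swarm behavior update X ← (X1 + X)/(2 + z) follows Algorithm 2.

```python
import numpy as np

rng = np.random.default_rng(1)

def jet_propulsion(x, food, p_min=1.0, p_max=4.0):
    # Simplified TSA jet propulsion toward the food source (Equations (18)-(20), (26)).
    c1, c2, c3 = rng.random(3)
    M = p_min + c1 * (p_max - p_min)   # social-interaction term (simplified)
    G = c2 + c3                        # gravity-like term (simplified)
    A = G / M                          # conflict-avoidance factor
    PD = np.abs(food - rng.random() * x)
    return food + 2.0 * A * PD if rng.random() >= 0.5 else food - A * PD

def adaptive_weight(k, K, z_max=1.0, z_min=0.0):
    # Assumed linearly decaying adaptive weight z.
    return z_max - (z_max - z_min) * k / K

def swarm_behavior(x_new, x_old, k, K):
    # Adaptive swarm behavior, X <- (X1 + X) / (2 + z), as in Algorithm 2.
    z = adaptive_weight(k, K)
    return (x_new + x_old) / (2.0 + z)

# Tiny demo on a 5-dimensional position vector.
x = rng.random(5)
x1 = jet_propulsion(x, food=np.ones(5))
print(swarm_behavior(x1, x, k=10, K=50))
```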
Computational Offloading Strategy's Code The offload policy is coded as shown in Figure 4. Assuming that the task set is T = {T 1 , T 2 , T 3 , T 4 , T 5 } and M = 4, meaning there are four edge servers (ECS), the offloading strategy X = {1, 0, 3, 10, 2} indicates that T 1 is offloaded to the CS for execution, T 2 is executed locally, T 3 is offloaded to ECS 2 , T 4 is offloaded to SeV 5 , and T 5 is offloaded to ECS 1 . In the TSA algorithm, the fitness function is used to evaluate the distance between the tunicate individual and the food source, i.e., the gap between a solution and the optimal solution to the problem. An offloading strategy is evaluated in three aspects, namely, computational delay, energy consumption, and offloading utility. The fitness evaluation function is constructed from the offloading utility, which balances computational delay and energy consumption, and the fitness evaluation value for the offloading strategy X is f(X), defined in terms of F, the system task offloading utility, F = {F i | i ∈ Ta}. M-TSA Based Multilateral Collaborative Computing Offloading Strategy's Algorithm Steps For the complex computational offloading problem in the cloud-edge-end collaborative computing system in the Internet of Vehicles scenario, multiple evolutionary strategies are introduced, and a multilateral collaborative computational offloading strategy based on the M-TSA algorithm is proposed with the following algorithmic steps: Step 1: Create cloud-edge-end collaborative computing system task instances in the Internet of Vehicles scenario, including creating CS instances, the edge computing group Es, the task vehicle set Ta, and the service vehicle set Se that simulate real road conditions and vehicle movement. Step 2: According to Equation (13), calculate the task offloading priority of each task T i in each edge computing domain and determine the task offloading order from the highest priority to the lowest. Step 3: According to constraints C4-C6 of Equation (15), predict the set of offloadable nodes SeN for each task, i.e., the individual boundary of the population, to narrow the search range of the algorithm and improve the task offloading utility. Step 4: Execute the M-TSA algorithm (see Section 5.2.5 for details). Take the ordered task set Ta and the predicted node set SeN as inputs, and derive the optimal computational offloading decision X best . Simulation Verification To verify the effectiveness of the proposed M-TSA-based multilateral collaborative computing offloading strategy, this section presents our simulation experiments using Python; the main parameters of the experiments are shown in Table 2. The experiments compare three computing systems, namely, cloud-edge-end collaborative computing, end-edge collaborative computing, and local computing. Further, the offloading strategy based on the M-TSA algorithm is compared with offloading strategies based on the TSA algorithm, the PSO algorithm, the Grey Wolf Optimizer (GWO), and the Differential Evolution (DE) algorithm. The algorithm parameters of this experiment are set as follows: number of iterations K = 50, population size of 40, one central cloud server, four edge computing domains, and a random group of vehicles in one computational domain. The communication simulation parameters of V2I and V2V are set with reference to the 5G-V2X network standard currently adopted by most car companies.
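The integer encoding of the offloading vector introduced at the start of this subsection (Figure 4) can be decoded as in the sketch below. The exact code ranges are an inference from the single example X = {1, 0, 3, 10, 2} with M = 4 edge servers, so the mapping should be read as an assumption rather than the paper's definitive scheme.

```python
def decode_offload_code(code, num_ecs=4):
    """Decode one element of the offloading vector X.

    Inferred mapping: 0 = local execution, 1 = cloud server (CS),
    2..(num_ecs + 1) = edge servers ECS1..ECSM, and larger codes index
    service vehicles (SeV index = code - num_ecs - 1).
    """
    if code == 0:
        return "local"
    if code == 1:
        return "CS"
    if code <= num_ecs + 1:
        return f"ECS{code - 1}"
    return f"SeV{code - num_ecs - 1}"

# Reproduces the example in the text: T1->CS, T2->local, T3->ECS2, T4->SeV5, T5->ECS1.
X = [1, 0, 3, 10, 2]
print([decode_offload_code(c) for c in X])
```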
Cloud-Edge-End Architecture Verification Under the same experimental environment and the same task instance, the mixed weights of delay and energy consumption (δ t i = 0.8, δ e i = 0.2) under the M-TSA algorithm are compared, and the optimization results of cloud-edge-end collaborative computing, end-edge collaborative computing, and local computing for three computing systems on delay and energy consumption are discussed. In the created task instance, the three computing systems are run 20 times independently, and the average of the optimal solutions of the results of 20 runs of each algorithm is taken. As can be seen in Figure 5, in the same experimental environment and with the same task instance input, the solution results of the cloud-edge-end collaborative computing system under the mixed weight evaluation are significantly better than those of the other two architectural computing systems, resulting in a smaller system delay and lower system energy consumption, and the advantage grows as the task computation volume increases. The experimental results show that the cloud-edge-end collaborative computing system is significantly better than other architectures and can realize complementary resources of cloud computing, edge nodes, and vehicle terminal devices, which can be flexibly configured according to the characteristics of the task and real-time demand to better adapt to different task scales. Figure 6 depicts the optimization comparison results of offloading utility, delay, and energy consumption for five algorithms to calculate offloading under mixed weights of delay and energy consumption with the same experimental environment, the same task instance, and the same initial population when the number of TaV is 20 and the number of SeV is 30. In the created task instance, the five algorithms are run 20 times independently with the same input, and the optimal solution is taken from the results of the 20 runs of each algorithm. As can be seen in Figure 6, the solution results of the M-TSA algorithm under the mixed weight evaluation are significantly better than the other four algorithms in the same experimental environment with the same task instances and the same initial population input, obtaining higher offloading utility, a shorter system time delay, and a lower system energy consumption, which is proof that the M-TSA algorithm has a stronger global optimization-seeking ability to derive the optimal computational offloading strategy. In addition, it can be seen in Figure 6 that the M-TSA algorithm can obtain the optimal solution in fewer iterations compared to other algorithms, indicating that the M-TSA algorithm has a fast optimality finding capability, which enhances its application to delay-sensitive vehicular networking special scenarios to compute offloading strategies. Figure 7 depicts the comparison results of offloading utility and delay for five algorithms to perform delay orientation optimization (δ t i = 1.0, δ e i = 0.0) experiments with the same experimental environment, the same task instances, and the same initial population when the number of TaV is 20 and the number of SeV is 30. Delay Orientation Optimization Test From Figure 7, it can be seen that the M-TSA proposed also has better results in calculating the offloading directed optimization delay compared with the PSO, TSA, GWO, and DE algorithms. From Figure 7a, we can see that the M-TSA algorithm has several large upward jumps relative to other algorithms, which in turn leads to better solutions. 
This is proof that the M-TSA algorithm has a stronger ability to jump out of local optima and can repeatedly escape local regions to fully search the global space and arrive at the optimal computational offloading strategy. Impact of Changes in the Number of Task Vehicles in the Computational Domain This section presents a simulation experiment in which the number of service vehicles (SeV) is 30 and δ t i = 0.8, δ e i = 0.2; the optimization comparison results of offloading utility, time delay, and energy consumption for the five algorithms with different numbers of task vehicles in the same experimental environment and with the same task instance are presented. The five algorithms are run 20 times independently with the same input under a fixed number of TaV, and the average of the optimal solutions of the 20 runs of each algorithm is taken. Please refer to Tables 3-5 for the data on the impact of the number of TaV. As can be seen in Figure 8, the solution results of the proposed M-TSA algorithm are significantly better than those of the other four algorithms for different numbers of task vehicles. It can derive a better computational offloading strategy, which enables the vehicle collaborative system to handle all computational tasks with higher offloading utility, the minimum system time delay, and the minimum system energy consumption, indicating that this algorithm is effective. Conclusions We discuss the problem of simultaneous computational offloading of multiple vehicles on a two-way straight highway in an Internet of Vehicles scenario and design a vehicle computational network model based on cloud-edge-end collaboration. The offloading utility, system time delay, and system energy consumption are the optimization objectives, and the vehicle motion characteristics and task time delay sensitivity are taken into account to make the computational offloading scheme more consistent with the actual situation. The simulation results show that the proposed offloading strategy can significantly improve the system task offloading utility and effectively reduce the system time delay and system energy consumption. In future research, relevant strategies will be further designed for more complex Internet of Vehicles scenarios to better match the actual situation. The proposed approach takes into account the vehicle motion characteristics and task delay sensitivity to make the computational offloading scheme more realistic, but further challenges such as dynamic changes in the vehicle network topology and unreliable connections still need to be addressed. An interesting future research direction for this work is to employ predictive algorithms to predict the location and connectivity of vehicles for more accurate design and tuning of computational offloading strategies to better match the real-world situation.
10,565.8
2023-05-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Development of a Validated UPLC-MS/MS Method for Analyzing Major Ginseng Saponins from Various Ginseng Species Ginsenosides, which contain one triterpene and one or more sugar moieties, are the major bioactive compounds of ginseng. The aim of this study was to develop and optimize a specific and reliable ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method for the analysis of twelve different resources of ginseng. The six marker compounds of ginsenoside Rb1, ginsenoside Rb2, ginsenoside Rc, ginsenoside Rd, ginsenoside Re, and ginsenoside Rg1, as well as an internal standard, were separated by a reversed-phase C-18 column with a gradient elution of water and methanol-acetonitrile. The multiple-reaction monitoring (MRM) mode was used to quantify and identify twelve market products. The results demonstrated that not only is the logarithm of its partition coefficient (cLog P; octanol-water partition coefficient) one of the factors, but also the number of sugars, position of sugars, and position of the hydroxyl groups are involved in the complicated separation factors for the analytes in the analytical system. If the amount of ginsenoside Rb1 was higher than 40 mg/g, then the species might be Panax quinquefolius, based on the results of the marker ginsenoside contents of various varieties. In summary, this study provides a rapid and precise analytical method for identifying the various ginsenosides from different species, geographic environments, and cultivation cultures. Introduction Ginseng has been used as a nutritional supplement and traditional Chinese medicine for centuries. Ginseng belongs to the Panax genus (Araliaceae family). When compared to other ginseng species, Panax ginseng C.A. Meyer, Panax quinquefolius L., and Panax japonicus C.A. Meyer are the most frequently used, and they are mainly known as Korean ginseng, American ginseng, and Japanese ginseng, respectively [1]. The first literature record of Shen-nung-ben-tsao-jing (The Holy Farmer's Material Medica) ca. 25 A.D. cited ginseng as an imperial herb that was regarded as a vital energy supplement, a sedative [2], and an antifatigue agent [3], which had nontoxic characteristics and could be administered over the long term. The medicinal parts of ginseng are the root and rhizome. studies suggest that parts of leaves [4], radix, root hairs, and berries [5] possess antiaging and antioxidant effects. Therefore, ginseng is used as a dietary supplement in daily life. Ginsenosides are the major active components of ginseng and they are generally regarded as standards for investigating the quality of ginseng herbs and complementary commercial products [6]. Several pharmacological properties of each ginsenoside, such as neuroprotective [7,8], antiinflammatory [9], and anticarcinogenic activity [10], have been shown in past research. It has also been claimed that ginseng effectively treats all types of diabetes, slows biological aging [11], slows cardiovascular disease [12], and boosts the function of the immune system [13]. The basic structure of ginsenoside is composed of a dammarane with four trans-rings of 17 carbon atoms. Ginsenosides are amphipathic, with five-carbon branched chains at C-20 and different numbers of sugar residues and hydroxyl (OH) groups at C-3, C-6, and C-20 [14]. To date, approximately 150 ginsenosides have been isolated and identified in the literature. 
Ginseng saponins can be divided into four structural groups: the panaxadiol group, the panaxatriol group, the ocotillol group, and the oleanolic acid group [15]. The panaxadiol and panaxatriol groups are structurally different at the sites of the sugar residues, and the panaxatriol moiety has a hydroxyl group at C-6. The panaxadiol group, including ginsenoside Rb1, ginsenoside Rb2, ginsenoside Rc, and ginsenoside Rd, and the panaxatriol group, including ginsenoside Rg1 and ginsenoside Re, contribute over 90% of the ginsenoside content of the ginseng genus [16] (Figure 1). Thus, in this study, these ginsenosides were selected as the standard for analyzing different ginseng resources. Previous studies demonstrated that many analytical approaches for the characterization of ginsenosides have been investigated, including thin layer chromatography (TLC), high-performance thin layer chromatography (HPTLC), high-performance liquid chromatography (HPLC), ultra-performance liquid chromatography (UPLC) with different detectors, gas chromatography (GC), and mass spectrometry (MS) [17,18]. The high molecular weights, chemical diversity, and structural similarity of these compounds have hindered simple and rapid ginsenoside analysis [19]. TLC was popular in ginseng research in the 1990s. HPTLC, which is more sensitive and accurate than using TLC plates, was used in the quantitative analysis of crude ginseng drugs [20]. HPLC was another favored method for ginsenosides and it was coupled with different detectors, including UV and photodiode array (DAD) [21]. Baseline interference resulted in a weak ginsenoside signal due to the poor absorption and unsuitable wavelengths in the range of 198 to 205 nm for UV detection [22]. Although the DAD provided multiple wavelength spectra, the sensitivity was lower than that of single spectrum UV. The use of GC was restricted by the characteristics of the compounds. GC was used to detect pesticides in ginseng samples since volatile compounds may not be present in the main ginsenosides [23]. 
Rapid analysis progress, and low consumption of the sample [24], especially mixtures of compounds (ginseng and its metabolites), a large proportion of ginseng analysis research is based on MS detection due to the high sensitivity [25]. MS is usually combined with HPLC or UPLC [26]. 17 articles were found while surveying the keywords of ginseng, tandem mass spectrometry, and species on PubMed. Again, surveying the keywords of ginseng, tandem mass spectrometry, and cultivation environment on PubMed, two articles were found. demonstrated that the quality of ginseng roots and rhizomes that were collected from different areas of Jilin and Heilongjiang provinces of China was different due to growing environment, cultivation technology, etc. [19]. None of these publications discuss the use of the calculated partition coefficient (cLog P) for the correction of the retention time and ginseng analytes. Our hypothesis is that the sugar position in the chemical structure of ginsenoside might affect the retention time. To prove this hypothesis, the cLog P values were calculated by the ChemDraw Professional 16 system for the analytes. The aim of this study was to develop and optimize a specific and reliable ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method for the simultaneous determination of the major saponins in P. ginseng, P. quinquefolius, and P. japonicus ginseng since little is known about the correlation of the species or cultivation environment of ginseng to the ratio of six ginsenosides in the Asian and North American ginseng. The method was applied to determine the contents of ginsenoside Rb 1 , ginsenoside Rb 2 , ginsenoside Rc, ginsenoside Rd, ginsenoside Rg1, and ginsenoside Re in twelve different ginseng sources. Optimization of UPLC-MS/MS Conditions The stock solution (100 ng/mL) of ginsenoside Rb 1 , ginsenoside Rb 2 , ginsenoside Rc, ginsenoside Rd, ginsenoside Re, ginsenoside Rg 1 , and noscapine was used to optimize the UPLC-MS/MS conditions. Both positive and negative electrospray were optimized at first. In the tuning file of the Waters Acquity UPLC TM system, the ion fragments under positive ESI conditions were more stable and display higher intensities when compared with the negative mode. Thus, a positive electrospray was applied for the following analyte identification ( Figure 2). The multiple-reaction monitoring (MRM) mode could not only detect multiple compounds in a single run, but also appeared to be highly selective for the quantification of twelve market products and their identification. The chromatographs showed high intensity, high sensitivity, and clear peak shapes. To optimize the separation of analytes, while using methanol alone, the peak shapes were not sharp enough, and a peak tailing phenomenon appeared. Therefore, a mixed organic solvent of methanol and acetonitrile at a volume ratio of 4:1 produced the best peak shape and it was selected as the organic phase ( Figure 3). Several studies comparing Asian and American ginseng ingredients via UPLC-MS have already been published; however, UPLC-MS/MS provides a higher level of sensitivity and specificity than UPLC-MS due to UPLC-MS/MS detecting signals through both parent ions and product ions, while UPLC-MS only detects the parent ions. For the analysis time, analyzing the fractions is time consuming and the elution time was 40 minutes for a single run. Furthermore, the ginsenosides of main concern were quite different [27]. 
Another two-dimensional LC-MS method was published for the global profiling of ginsenosides. The article also mentioned that the PDA detector was set between 190, 203, and 400 nm, which might cause inaccuracies in the quantification due to the signals at wavelengths below 220 nm [28]. Electrospray ionization (ESI) and atmospheric pressure chemical ionization, which evaporates the solvent and sample from a gas into ions, are generally used [29]. A multistage LC-MS/MS, which provides a high level of sensitivity and specificity, was chosen for use in this study. As shown, ginseng analytical methods are commonly time consuming [30]. When compared with previous reports, the advantage of this study is that it focuses on the six main ginsenosides (Rb1, Rb2, Rc, Re, Rd, and Rg1), which account for 90% of the contents of the three main ginseng species. The single analysis duration was only 12 minutes, and the relationship between the structure and the retention time is discussed for different ginseng sources. Regarding the mass spectrometry conditions, the capillary voltage was downregulated from 4.0 kV to 3.8 kV, which provided a steady high intensity. For the cone voltage, although each ginsenoside has its own optimal cone voltage, in the Waters Acquity UPLC TM system the same cone voltage had to be set for all ginsenosides. The optimized cone voltage that was provided by the Waters Acquity UPLC TM system was 40 V for all ginsenosides. We checked the cone voltages within ±5 V (35 V-45 V), eventually using 40 V for the ginsenosides. The analytes were quantified in the MRM mode at m/z 1131.65, 365.14 for ginsenoside Rb1; m/z 1101.67, 335.13. Method Validation The calibration curves showed good linearity in the range of 10-500 ng/mL for all ginsenosides. The calibration curves and correlation coefficients (r2) were as follows: y = 0.0012x − 0.001 (r2 = 0.999, ginsenoside Rb1), y = 0.0013x + 0.0034 (r2 = 0.999, ginsenoside Rb2), y = 0.001x − 0.0027 (r2 = 0.999, ginsenoside Rc), y = 0.0008x − 0.0021 (r2 = 0.999, ginsenoside Rd), y = 0.0002x − 0.0002 (r2 = 0.999, ginsenoside Re), and y = 0.0004x + 0.009 (r2 = 0.999, ginsenoside Rg1). The accuracy and precision were evaluated by intraday and interday assays. The RSD and accuracy were calculated to be within the ranges of 1.38%-6.68% and −8.86% to 12.88% for intraday assays (Table 1), respectively, which correspond to the U.S. Food and Drug Administration bioanalytical method validation guidance. The lower limit of detection (LOD) is 10 ng/mL and the lower limit of quantification (LOQ) is 25 ng/mL, which were determined by diluting the standard solution until the signal-to-noise ratios (S/N) of the analytes were approximately 3 and 10, respectively [31]. 
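As a worked illustration of how such a calibration curve is used, the sketch below back-calculates concentrations from the reported ginsenoside Rb1 regression (y = 0.0012x − 0.001) and computes the intraday bias and RSD figures of merit. Only the slope, intercept, and nominal-concentration logic come from the text; the measured responses are hypothetical replicate values.

```python
# A minimal sketch of applying a linear calibration curve and computing intraday
# accuracy (bias) and precision (RSD). Slope and intercept are the reported values
# for ginsenoside Rb1; the replicate responses below are hypothetical.
import numpy as np

slope, intercept = 0.0012, -0.001          # y = 0.0012x - 0.001 (ginsenoside Rb1)

def concentration(response):
    """Back-calculate concentration (ng/mL) from the detector response y."""
    return (response - intercept) / slope

def bias_percent(c_obs, c_nom):
    return (np.mean(c_obs) - c_nom) / c_nom * 100.0

def rsd_percent(c_obs):
    return np.std(c_obs, ddof=1) / np.mean(c_obs) * 100.0

# Hypothetical six intraday replicates at a nominal 100 ng/mL level.
responses = np.array([0.118, 0.121, 0.119, 0.122, 0.117, 0.120])
c_obs = concentration(responses)
print(f"bias = {bias_percent(c_obs, 100.0):+.2f} %, RSD = {rsd_percent(c_obs):.2f} %")
```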
The Correlation between the Chemical Structure and Retention Time cLog P was applied to evaluate the correlation between the retention time and the analytes, in order to investigate the relationship between the chemical structure and the separation. A lower cLog P value might represent a higher polarity, which might cause ginsenoside Rb1 to be eluted before ginsenoside Rb2. Thus, the cLog P was applied to evaluate the correlation of the retention time and the ginseng analytes. The differences in the chemical structures between the panaxatriol and panaxadiol ginsenosides and the basic dammarane are the position of the sugar on C-6 and the number of hydroxyl (OH) groups on C-3. The panaxatriol group is more polar than the panaxadiol group due to the additional hydroxyl (OH) group on C-3. Thus, ginsenoside Re and ginsenoside Rg1 have shorter retention times than the others. Comparing ginsenoside Rb1 and ginsenoside Rb2, the functional groups on the R2 site were glucose and arabinose, respectively, which caused a lower cLog P value for ginsenoside Rb1. The lower cLog P value represented a higher polarity, which is why ginsenoside Rb1 eluted before ginsenoside Rb2. Ginsenoside Rb2 (pyranose form at the R2 site) and ginsenoside Rc (furanose form) are isomers, and their retention times can also be explained by cLog P. While the compounds have similar structures, the form of arabinose was not the same. Ginsenoside Rd possesses fewer sugar residues at C-20 when compared with other panaxadiol group ginsenosides; therefore, it is less polar than other ginsenosides and it has the longest elution time (Table 2). The results demonstrated that cLog P is not the sole factor, and the number of sugars, position of the sugars, and position of the hydroxyl groups were involved in the complicated separation factors for the analytes in the analytical system. These results are consistent with a previous report [32], showing that the partition coefficient is one of the factors that correlates with the retention time, together with other factors such as the number of functional groups or the position of the functional groups. Ginseng Sample Quality According to Table 3, each species contained a consistent amount of each ginsenoside compound. When compared to ginseng sample C (Korean ginseng) and ginseng sample F (Japanese ginseng), P. quinquefolius (sample G) had the highest content of ginsenoside Rb1, ginsenoside Rd, and ginsenoside Re, but the lowest amount of ginsenoside Rb2 and ginsenoside Rg1. P. ginseng had the highest concentrations of ginsenoside Rb2 (around the average of 3.78 mg/g) and ginsenoside Rg1 (average of 6.4 mg/g), which was the opposite result to that found in P. quinquefolius L. 
Our results are in agreement with those of the Chen et al report, where ginsenoside Rb2 and ginsenoside Rc were more likely to be present in Asian ginsengs, whereas ginsenoside Rb1 and ginsenoside Rd tended to be present in American ginsengs [33]. P. japonicus had the lowest contents of ginsenoside Rb1 and ginsenoside Rd, and ginsenoside Re was undetectable in sample E and sample F. The compositions of the ginsenosides in P. ginseng and P. japonicus were more similar, but Japanese ginseng (sample E and sample F) had obviously low concentrations of ginsenoside Rd. Figure 4 provides the three ginseng species profiles, including samples A, F, and J, representing P. ginseng, P. japonicus, and P. quinquefolius, respectively. Figure 4 indicates that the content of ginsenoside Rb1 in P. quinquefolius was far above that in other species, and it was difficult to detect ginsenosides Re and Rg1 in P. japonicus. Table 3 also shows that the contents of ginsenosides in sample I were far below those in the other American ginseng samples, and these differences might be attributed to the cultivation environment. Wild-simulated samples displayed the difficulties that are typically shown by traditional Chinese medicines regarding consistent quality. The method that is provided in this study will be a good approach for evaluating the quality of ginseng. Different culturing areas may provide distinct soil conditions, temperate climate, humidity, and even altitude, all of which will affect the contents of ginsenosides. 
In general, five geographical species provide the total ginseng supply: South Korea (P. ginseng), China (P. ginseng), Japan (P. japonicus), Canada (P. quinquefolius), and the United States (P. quinquefolius) [1]. For high-quality ginseng production, ginseng requires a cool and temperate climate and special soil conditions, including soil that is nutrient rich, slightly acidic, well drained, and under deep shade [34]. Three different ginseng cultivation methods were analyzed in this study: truly wild, wild simulated, and wood cultivated. In truly wild cultivation, the plants were free to grow without any management by humans. In wild-simulated cultivation, the natural conditions of temperature and humidity for ginseng growth were replicated, but the soil conditions could not meet the requirements. In the wood-cultivated method, ginseng was grown with intensive field plowing and artificial shade structures. Except during the winter, the ginseng beds were protected and covered with floating plastic to create shade, strengthen photoselectivity, and protect against heavy rains [35]. In the study by Chen et al (2019), the average contents of ginsenosides indicated that the distinct volcanic pumice soil conditions of New Zealand might be more suitable for ginseng cultivation than original native locations (China and Korea) [36]. Different ginseng processing methods and cultivation environments [37] may be required due to variations in the amounts of ginsenosides [38]. P. ginseng (Korean ginseng), which is mainly known as red ginseng and white ginseng, has a distinct preparation method. A steaming and drying process is used to produce red ginseng. The steaming step might elevate the contents of ginsenosides in the panaxadiol and panaxatriol groups [39]. 
American ginseng is mostly cultivated in Canada and the northern United States. However, due to a supply shortage and price fluctuations, American ginseng has recently been cultivated in northern China. The contents of the ginsenosides might change accordingly. Therefore, the relative proportions of ginsenosides might indicate the market type of ginseng. Origin of Ginseng Samples Twelve different market ginseng samples were analyzed in this experiment (Table 3). Four different P. ginseng sources were obtained and two P. japonicus were analyzed. All of the market P. ginseng and P. japonicus were wood cultivated. Six P. quinquefolius samples were provided for this experiment, including truly wild, wood-cultivated, and wild-simulated market ginseng. Dr. Wen-Ya Peng identified the origins of ginseng material samples according to her medical doctor specialty. Preparation of Standard Solutions Standard stock solutions of ginsenoside Rb 1 , ginsenoside Rb 2 , ginsenoside Rc, ginsenoside Rd, ginsenoside Re, ginsenoside Rg 1 , and noscapine were prepared at a concentration of 1 mg/mL in methanol. All of the stock solutions were stored at −20 • C before use. Preparation of Ginseng Samples The ginseng (root part) was cut into pieces and weighed; then, the ginseng slice (1 g) was soaked in a tube with 6 mL of 50% ethanol for 30 min. The tube was then incubated and sonicated at 37 • C for 6 h. The samples were filtered through a 0.22 µm Millipore filter. A rotary evaporator was used to ensure that the ethanol residue was eliminated. Subsequently, the sample was freeze-dried to obtain a powder. The ginseng-extracted powder was added to 50% methanol to obtain a 1 mg/mL concentration, centrifuged at 13,000 rpm for 5 min. (ThermoFisher FRESCO 17, Dreieich, Germany), and then filtered through a 0.22 µm Millipore filter. To meet the range of calibration curves, all the samples were diluted 50-100X with 50% methanol before analysis. All analytes were prepared as above before injection into the UPLC-MS/MS for analysis. Instruments and Conditions The UPLC-MS/MS system consisted of a Waters Acquity UPLC TM system (Waters Co., Milford, MA, USA), a binary solvent manager, an automatic liquid chromatographic sampler, and a Waters Xevo TM tandem quadrupole mass spectrometer equipped with an ESI source. All ion transitions and collision energies were determined and optimized while using the MassLynx 4.1 software data platform (Waters). The mass spectrometry conditions were set, as follows: ESI, positive mode; desolvation temperature, 550 • C; collision gas, argon; nebulizing gas, nitrogen; source temperature, 150 • C; desolvation gas flow, 800 L/h; capillary voltage, 3.8 kV; and, cone gas flow, 60 L/h. The optimized cone voltage was 40 V for all of the ginsenosides. Figure 2). Noscapine was used as the internal standard (IS) for these analytes. The analytical column for ginsenoside separation was a Purospher ® STAR RP-18 end-capped column (100 × 2.1 mm, 2 µm, Merck KGaA, Darmstadt, Germany) maintained at a temperature of 40 • C in the column oven. Mobile phase A consisted of triple-distilled water and mobile phase B consisted of acetonitrile: methanol 1:4 (v/v). The mobile phase was filtered through a 0.22 µm filter and then degassed by a sonicator for one hour before use. The gradient elution was 80% to 5% A in 0.01-10 minutes and back to 80% A in 10-15 minutes. The flow rate was 0.2 mL/min., and the injection volume was 10 µL. 
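The gradient programme above can be written down as a small table of breakpoints; the sketch below encodes it and interpolates the mobile-phase composition at arbitrary time points, which can help when transcribing the method to other instrument-control software. The breakpoints are taken from the text; the linear ramping between them is an assumption.

```python
# Sketch encoding the reported gradient programme (80% -> 5% A over 0.01-10 min,
# back to 80% A over 10-15 min) and interpolating %A at any time point.
import numpy as np

gradient = [(0.00, 80.0), (0.01, 80.0), (10.0, 5.0), (15.0, 80.0)]  # (min, %A)
times, percent_a = zip(*gradient)

def percent_A(t_min):
    """Linear interpolation of %A; %B (MeOH:ACN 4:1 v/v) is the remainder."""
    return float(np.interp(t_min, times, percent_a))

for t in (0.0, 5.0, 10.0, 12.5, 15.0):
    print(f"t = {t:5.1f} min  %A = {percent_A(t):5.1f}  %B = {100 - percent_A(t):5.1f}")
```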
The MassLynx 4.1 software (Waters Corporation, Milford, MA, USA) data platform was used for spectral acquisition, spectral presentation, and peak quantification. Validation of the Analytical Method The method was validated while using current U.S. Food and Drug Administration bioanalytical method validation guidance. The accuracy, precision, and calibration curves were evaluated. The intraday variability was determined by quantifying six repetition sets of different concentrations on the same day. The accuracy (bias) was calculated from the nominal concentration (C nom ) and the mean value of observed concentrations (C obs ) while using the following formula: Bias (%) = [(C obs − C nom )/C nom ] × 100. The precision, as the relative standard deviation (RSD), was calculated, as follows: RSD (%) = [standard deviation (S.D.)/C obs ] × 100. All of the linear calibration curves were required to have a coefficient of estimation of at least 0.995. Conclusions This study provided methods for evaluating the quality of the traditional Chinese medicine ginseng. A rapid and validated UPLC-MS/MS detection system was developed for the simultaneous determination of ginsenoside Rb 1 , ginsenoside Rb 2 , ginsenoside Rc, ginsenoside Rd, ginsenoside Re, and ginsenoside Rg 1 . The method was successfully applied to the analysis of twelve different ginseng sources. The results of this study can be used to determine the tendencies of the ginseng species.
6,336.4
2019-11-01T00:00:00.000
[ "Chemistry" ]
Deep regression with ensembles enables fast, first-order shimming in low-field NMR Shimming in the context of nuclear magnetic resonance aims to achieve a uniform magnetic field distribution, as perfect as possible, and is crucial for useful spectroscopy and imaging. Currently, shimming precedes most acquisition procedures in the laboratory, and this mostly semi-automatic procedure often needs to be repeated, which can be cumbersome and time-consuming. The paper investigates the feasibility of completely automating and accelerating the shimming procedure by applying deep learning (DL). We show that DL can relate measured spectral shape to shim current specifications and thus rapidly predict three shim currents simultaneously, given only four input spectra. Due to the lack of accessible data for developing shimming algorithms, we also introduce a database that served as our DL training set, and allows inference of changes to 1H NMR signals depending on shim offsets. In situ experiments of deep regression with ensembles demonstrate a high success rate in spectral quality improvement for random shim distortions over different neural architectures and chemical substances. This paper presents a proof-of-concept that machine learning can simplify and accelerate the shimming problem, either as a stand-alone method, or in combination with traditional shimming methods. Our database and code are publicly available. Introduction In recent decades, deep learning (DL) [1] has shown unprecedented achievements in various industrial and scientific fields. Although areas of research such as computer vision or natural language processing have become commonplace, many fields of science have only recently begun to exploit the vast possibilities that DL can provide. One such area is nuclear magnetic resonance (NMR) spectroscopy, a non-destructive technique widely used in chemistry, physics and medicine to study the properties of liquid or solid samples. The most notable contributions from DL currently employed in NMR include, but are not limited to, methods for reconstruction of non-uniformly sampled (NUS) spectra [2] and truncated free induction decays (FIDs) [3], chemical shift prediction [4], or denoising and segmentation of magnetic resonance images [5,6]. Other possible applications of DL for solving challenges in NMR are also discussed and suggested in the community [7]. Most of these approaches and suggestions are being applied at the post-processing stages of NMR spectroscopy, taking the hardware setup for the NMR measurement for granted. In contrast, we propose to optimize the preparation preceding an NMR measurement with deep learning methods. One crucial parameter of the NMR setup, which requires the most careful and precise tuning for subsequent successful measurements, is the homogeneity of the magnetic field. The strength of the magnetic field B0 directly influences the precession frequency f of each stationary spin with its gyromagnetic ratio γ, as described by the well-known Larmor equation 2πf = −γB0. For a perfectly uniform field and a chemically homogeneous sample, the observable signal in the frequency domain yields a single Lorentzian peak centered at f. However, in an inhomogeneous magnetic field B0, the spins experience different local fields due to various effects (e.g. 
susceptibility differences between material inside the field, manufacturing inaccuracies of the coil, or intramolecular shielding effects in the sample itself), and thus possess frequencies shifted around the central Larmor frequency. This means that inhomogeneities broaden the line shapes of the spectrum, decrease their amplitude and consequently decrease the signal-to-noise ratio (SNR). Furthermore, the non-bijective mapping of the FID from a three-dimensional volume to a onedimensional spectrum introduces ambiguities, i.e., the connection between the location of each distortion and its direct impact on the spectrum is lost during the acquisition process (Fig. 1). Especially with recent efforts in miniaturizing NMR technologies [8], signal sensitivity increases, but field inhomogeneities remain hard to eliminate. To overcome distortions in the measured spectrum caused by a magnetic field inhomogeneity, a method referred to as "shimming" is used. Modern shimming is best described as the procedure of superimposing a secondary correction magnetic field by adjusting the currents in a finite set of field-orthogonal coils (the so-called shim coils) to correct for inhomogeneities in the magnetic field B 0 . Modern NMR spectrometers usually require the magnetic field to be uniform in the range of parts per billion (ppb). However, the shimming procedure is often a tedious and painstaking process that demands extensive time and experience from the operator [9]. This is due to the large number of shim coils required for spectroscopy, the inter-dependence of shim field patterns (violations of orthogonality), and the lack of a straightforward solution for the correct shim coil values in an (expected) non-convex solution space. This situation makes it a challenge to provide a correct, fast, and reliable shimming procedure for the magnetic field, especially on the order of a few seconds. Several approaches already exist to solve this problem and automate the shimming procedure to increase spectral quality. The most robust approaches to date utilize the downhill simplex (or Nelder-Mead) method [10,11], and adaptations thereof [12]. Also, automated shimming based on a lock channel is widely used, e.g. for continuously adapted shimming during lengthy NMR measurements. Unfortunately, with large initial field inhomogeneities, locking may be impossible due to its inherently low SNR. A significant contribution to shimming utilizes gradient shimming [13,14]. For this, ideally, rapidly switchable gradient coils would be necessary, which may not be available. Also, the B 0 field maps could be distorted [15], requiring appropriate corrections and potentially repetitions. Despite emerging solutions [16][17][18], signal-based methods remain significant and can automatically improve spectral quality if sufficient runtime is provided. Nevertheless, the achieved improvements are often not perfect, and manual refinement is necessary due to imperfect hardware, non-orthogonal or dependent shims [19], and sensitivity to starting values and step sizes for regular shimming methods. We propose to fill this gap by utilizing a deep learning (DL) algorithm to guide the shimming process. As a tool, DL has demonstrated great success over various domains by end-to-end learning, i.e., an algorithm learns, given only input and target, to automatically detect features. 
In general, the feature detection is enabled by combining representations in multiple layers based on representations from the previous layer, where each layer represent more abstract feature levels [1]. With sufficient capacity, DL can thus learn arbitrary complex functions of high-dimensional input, and still allows for fast inference. Therefore, we hypothesize that it is possible for DL to learn shim values given 1D signals drawn from 3D space, even when the shim values do not directly correspond to visible features in the signal. Accordingly, we utilize supervised deep regression due to its capability to automatically detect non-linear relations between high-dimensional input data and numerical targets with high performance. We furthermore merge deep regression with the idea of ensembles, by combining multiple weak models to reduce prediction variance. We focus on a non-iterative method for initial shimming to rapidly reach a state near the global optimum. Our scenario assumes shimming a probe from scratch with first-order shims, and beneficial use cases are the acceleration or improvement of existing automated shimming methods, focusing on highthroughput NMR alone or in conjunction with miniaturized hardware [20], without the use of gradients or even a lock channel. For this, we have generated a publicly available database for first-order NMR shimming that allows inference of spectral changes depending on shim offsets. We utilize the database to train a set of deep regression models (so-called weak learners) that can simultaneously predict three first-order shim currents given four distinct NMR measurements: the current unshimmed spectrum, and three spectra with individually modified shim values. The weak learners are then combined in an ensemble, via a metamodel, to increase prediction stability, and the performance is analyzed in situ for different evaluation metrics. Furthermore, we conduct limited comparison with regular shimming based on the downhill simplex method. In summary, our paper makes the following contributions: Creation of the first spectral database (ShimDB) dedicated to low-field NMR shimming 1 ; Proof-of-concept for utilizing deep learning for shimming; Establishment of a method for rapid shimming based on deep regression with ensembles (DRE) 1 , shown in Fig. 2. Note that we only use information learned from data, i.e., we employ a knowledge-based approach. We also do not require priors, or mathematical formulations of spatial shim functions, as would be required for gradient shimming. The rest of the paper is organized as follows. Section 2 provides an overview of related work on automated shimming, deep regression, and ensembles. In Section 3, we introduce our database, and in Section 4, our method. In Section 5, we describe our experiments for both DL training and in situ deployment. We discuss our approach in Section 6 and conclude in Section 7. Fundamentals and related work We start by unfolding the shimming problem and highlighting some of the successful or recent approaches. We also describe relevant advances in deep learning that we adopted in our method. The non-bijectivity between the one-dimensional spectrum and its three-dimensional origin introduces additional ambiguities. The two cubes indicate differing spatially varying field strengths in the region of interest, as caused by inhomogeneities, that nevertheless give cause to similarly shaped spectra. Automated shimming The shimming problem. 
The nuclear spin ensemble precession frequencies differ depending on the locally experienced and spatially varying field strengths of an inhomogeneous magnetic field. Mathematically, the shimming algorithm finds the scalar weights (w1, w2, ..., wn) for the n shim currents such that the corrected magnetic field B0* is as uniform as possible, i.e., sum_{i=1..n} wi Si ≈ −ΔB0 (Eq. 1). The basis set of (ideally orthogonal) spatial functions Si, i ≤ n ∈ N, and their real scalar weights wi are adjusted to reproduce and thus cancel the field inhomogeneities ΔB0. Each Si represents a specific shim coil (or spatial shim coil function) with its current wi. An example of the shimming procedure for an arbitrary axis in space is given in Fig. 3. Magnetic field inhomogeneities cause distortions in the spectrum, which can be canceled by sequentially adding spatial shim functions Si with the correct weights wi. Fig. 2. Three-dimensional distortions (illustrated as a 3D "inhomogeneity cube") of the sample volume collapse to a one-dimensional signal. By systematic offsets of the available shim currents, a batch of distinguishable spectra is obtained, which serves as input to a deep neural network. The prediction contains the shim values to achieve a more homogeneous field and thus a spectrum of higher quality. Fig. 3. Principle of shimming along the z-axis using higher-order shims. Starting from an unshimmed spectrum with unknown field inhomogeneities ΔB0, correction fields induced by the shim coils (dashed lines) and their correct weights wi must be selected sequentially such that the inhomogeneities are canceled. The quality of the final spectrum is usually specified by the full width at half maximum (FWHM). Layout inspired by [21]. The concept of homogenization by superimposing correction fields was developed by Golay in 1958 [22]. In general, according to their optimization objective, automated shimming methods can be separated into signal-based and field-based methods. Field-based methods, or gradient shimming [13], use B0 field maps acquired by gradient-echo imaging sequences to calculate an optimal combination of basis functions to cancel inhomogeneities. We focus on iterative signal-based shimming, which optimizes a scalar quality criterion based on signals such as the FID, lock channel, or spectral lineshapes. Moreover, we can neglect theoretical limitations of spatial functions and directly optimize for the shim currents. 1-D signal-based shimming. These are based on 1-D search algorithms, covered by the Tuning [23] or Coggins [24] algorithms, and are characterized by optimizing one variable at a time. The methods all share the procedure of repeatedly comparing three spectra until the minimum of the quality criterion of choice can be approximated by fitting a parameterized parabolic curve. Adjusting one shim at a time is simple and often faster than other methods, but does not incorporate dependencies between shim values. This is why it often has to be iterated. N-D signal-based shimming. In n-dimensional optimization, a group of n variables are adjusted simultaneously. For this, the downhill simplex method [11], or its modifications [12], are usually used. The methods commonly use a geometrical polytope (a "simplex") of n + 1 vertices, where each vertex is represented by the quality criterion corresponding to specific shim settings. The simplex then evolves through solution space using geometrical 
operations such as reflection, expansion, and contraction of the simplex, based on the worst, average and best quality criterion, until a local minimum is reached, evidenced by vertices of similar quality criterium. Note that the Nelder-Mead simplex method [10] applicable in shimming should not be confused with the simplex algorithm of Dantzig for linear programming [25]. The major limitation of the downhill simplex method is its slow convergence speed [26]. Several improvements, such as quasigradient methods [27], adaptive shrinking coefficients [28] or perturbed centroids [29], are combined in [12] to enhance NMR shimming. Other methods, such as modified steepest descent [11], or the rapid, modified simplex method proposed by Webb et al. [30], also enable n-dimensional shim optimization. Other approaches. The main idea behind the method introduced by Michal [19] is to orthogonalize the shim coil gradients such that the optimization of one shim becomes independent of the others. With truly-orthogonal ''composite shims", finding a global minimum becomes a one-dimensional calculation. The method brings various limitations: gradients show non-linear behaviour due to current supply, heating, or other components, and thus affect the symmetry required to calculate composite shims. Deep learning methodologies We now consider deep learning methods, a branch of machine learning, which learns representations from a set of prior data by abstraction at multiple levels. Deep learning overview. Deep neural networks are most commonly implemented as stacked layers of artificial neurons, where the weight of each neuron is updated (or learned) with backpropagation [31] and gradient descent, to minimize a loss function on given training data. However, generalisation is judged by the prediction performance on previously unseen (out-of-sample) data. Therefore, different regularization techniques can prevent overfitting or unwanted memorization of the training samples. For example, early stopping of the training process is used when the training and test errors diverge, dropout [32] randomly shuts down neurons during weight update, and augmentation is used to increase the amount and variance of data. The usage of non-linear activation functions, such as the rectified linear unit (ReLU) [33], with a sufficient number of processing layers (depth), allows neural networks to approximate any arbitrary function. Furthermore, different connection patterns, e.g. each-to-each (fully-connected), of the neurons in and between each layer, allow for efficient prediction on different data structures (e.g. sequential or image-like). We refer to [1] for a more detailed description of the entire DL idea. Deep Regression. The generalization of standard regression (i.e. prediction of a single continuous variable) to multiple, possibly interdependent variables is often referred to as multi-target regression [34]. When deep neural networks replace the function approximation, one commonly refers to deep regression. In [35], vanilla deep regression is coined to refer to convolutional neural networks (CNNs) with a linear last layer. Spectral reconstruction [2] and denoising methods [5] in NMR can be seen as regression methods, where input and output shapes are similar. Regression without convolutions is used to predict chemical shifts in [36] or [37]. However, in both cases, it only predicts a single target. We also note with interest all the advances made for successful deep regression. 
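To make the conventional signal-based baseline discussed above concrete before returning to the deep-learning models, the following sketch runs Nelder-Mead (downhill simplex) shimming against a toy simulated acquisition. The simulated line shape, the quality criterion (FWHM divided by peak height), and the optimizer settings are illustrative assumptions rather than the spectrometer's actual routine.

```python
# Toy sketch of signal-based simplex shimming: each cost evaluation "acquires" a
# spectrum whose line width grows with the residual shim error, and Nelder-Mead
# searches the three first-order shim currents that minimize FWHM / peak height.
import numpy as np
from scipy.optimize import minimize

TRUE_OFFSET = np.array([1200.0, -800.0, 300.0])   # synthetic "unknown" distortion
FREQ = np.linspace(-50.0, 50.0, 2048)             # Hz axis of the toy spectrum

def acquire_spectrum(shims):
    """Simulated acquisition: Lorentzian whose width grows with the shim error."""
    width = 0.5 + 1e-3 * np.linalg.norm(shims - TRUE_OFFSET)
    return width / (FREQ**2 + width**2)

def fwhm(spectrum, axis=FREQ):
    half = spectrum.max() / 2.0
    above = np.where(spectrum >= half)[0]
    return axis[above[-1]] - axis[above[0]]

def cost(shims):
    s = acquire_spectrum(shims)
    return fwhm(s) / s.max()                      # lower = narrower and taller

result = minimize(cost, x0=np.zeros(3), method="Nelder-Mead",
                  options={"xatol": 1.0, "fatol": 1e-9, "maxiter": 2000})
print(result.x, cost(result.x))                   # shim currents at the optimum found
```

Because every cost evaluation corresponds to a new acquisition on real hardware, the number of iterations directly determines the shimming time, which is the bottleneck the DRE approach targets.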
Nevertheless, to allow for exploration of DL in the shimming problem, we first investigate vanilla models that require fewer assumptions, which were found to yield results comparable to complex regression models in computer vision experiments [35]. 1D Convolutions. From [38], we adopt the idea of interpreting NMR spectra in the frequency domain as 1D-images, in order to apply CNNs and developments from computer vision. The concept behind convolutional layers, inspired by the visual cortex, is to sweep filter kernels over a grid-like input to generate representations of the next layer, instead of the direct links used in fully-connected layers. CNNs incorporate parameter sharing and sparse connectivity to decrease memory requirements and allow predictions independent of the features' locations [39]. We also extend this idea by using multiple input spectra as a batched input to our model, similar to the RGB channels of an image. A comprehensive overview of the capabilities of one-dimensional CNNs, which are applied to NMR spectra in this paper, is given in [40]. The possibility to visually relate specific distortions in the NMR spectrum to specific shim currents [41] supports our goal of automating the shimming process with DL. Ensemble methods. Ensemble methods in machine learning combine multiple models to construct a more powerful model to achieve higher accuracy or lower variance in predictions [42,43]. In general, ensembles consist of two levels: multiple weak learners (level-0), and a combination of their predictions (level-1), often represented by a meta-model. Several forms, such as bagging [42], boosting [44], or stacking [45] can be distinguished, and they differ in data handling or training of the different levels' models. Some applications of ensembles to regression are summarized in [46]. In the field of NMR, ensembles have already been used to predict fish sizes from metabolomic profiles using spectral data [47]. First-order shimming dataset The success of deep learning algorithms strongly depends on the quantity and quality of available data, which should represent the task as accurately as possible without introducing biases. Thus, the first step in the development of DL-assisted automatic shimming was the creation of a database for algorithm training. The shimming database (ShimDB) is a collection of proton NMR signals recorded under the application of shim coil fields and so far contains a subset called LinearShimDB, a small-scale dataset containing over 9000 instances with only linear shim offsets. It allows inference of changes to the NMR spectrum or free induction decay (FID) depending on linear shim offsets. Each data instance includes the following information: a binary file containing the raw 1H-FID with dimensions 1 × 32768; the shim values ∈ [−2^15, 2^15] for n shims; the acquisition parameters; and the processing parameters. We pretend that the spectrometer only has n = 3 first-order shims, so that only the X, Y, and Z shim values are non-zero, but this is easily extended. The measured sample consists of distilled water mixed with copper sulfate to reduce the spin-lattice or longitudinal relaxation time T1, allowing for faster database acquisition. In analogy to the study [48], the 50 ml H2O is mixed with 0.062 g CuSO4·5H2O (CAS No. 7758-99-8), resulting in a concentration of 5 mmol/L CuSO4. With inversion recovery experiments we find that T1 ≈ 290 ms. 
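As a concrete picture of the grid-like offsets behind LinearShimDB (described in the acquisition procedure that follows, with shim range R = ±10000 and step size s = 1000 as listed in Table 1), a minimal sketch is shown below; acquire_fid() is a hypothetical placeholder for the spectrometer call, and the enumeration reproduces the 21^3 = 9261 instances.

```python
# Sketch of the systematic offset grid: each of the X, Y, Z shims is stepped by a
# multiple of s within +/-R around the reference values; 21**3 = 9261 combinations.
from itertools import product

R, STEP = 10_000, 1_000
offsets = range(-R, R + STEP, STEP)               # -10000, -9000, ..., +10000

def acquire_fid(dx, dy, dz):
    """Placeholder: apply offsets relative to the reference shims and record the FID."""
    ...

grid = list(product(offsets, offsets, offsets))
print(len(grid))                                   # 9261 instances
for dx, dy, dz in grid:
    fid = acquire_fid(dx, dy, dz)                  # stored together with (dx, dy, dz)
```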
The data was acquired on the low-field Magritek Spinsolve 80 Carbon spectrometer (Magritek GmbH, Aachen, Germany, [49]) with a 1H frequency of 80 MHz using standard 5 mm sample tubes and the Spinsolve-Expert software. Experimental parameters, and the dataset's characteristics, are summarized in Table 1. A large reception bandwidth was chosen because the initial line shape is unknown when using large shim offsets. Also, the frequency lock is not activated so that the signal potentially could leave the field of view. The acquisition procedure of the LinearShimDB subset was as follows. The manufacturer's automated shimming technique, based on the downhill simplex method [50], was used to obtain a reference spectrum of decent quality. Then, all shim values except the three linear shims X, Y, and Z were set to a current of zero ampere. The resulting spectrum and corresponding shim settings y_r were used as the reference values. The database parameters were obtained by relative, systematic offsets (a·s, b·s, c·s) from the reference shim values in a range R with step size s, where a, b, c ∈ [−R/s, R/s] change in a grid-like manner. R is chosen large enough to mimic shimming a probe from scratch. For each combination, the raw FID, acquisition parameters, and shim values were stored. We believe that our data can successfully be reused for training ML models on different setups and scenarios by transfer learning or domain adaptation techniques [51]. Deep regression with ensembles for shimming In this section, we define the mathematical problem and introduce our network structures for deep regression, where we differentiate between weak learners (level-0), and the meta-model (level-1). Furthermore, we introduce our performance metrics. Problem definition in terms of DL and NMR Consider the training database D of input-target pairs, including the input x ∈ R^{W×4} with dimensions W × 4, where W is the input's width, and the associated target y = (y1, y2, ..., yn) ∈ R^n, defined as a real-valued vector of n elements, with n being the number of separate shim coils. Also, consider the regression model F_θ(·), represented by a deep convolutional neural network with parameters θ. The network parameters θ are learned in a supervised manner using the database D in order to minimize the mean squared error (MSE) between the prediction ŷ = F_θ(x) and the target y. In terms of NMR, the predictions ŷ translate/map to the shim values wi and should solve Eq. 1. The inputs x are defined as x = (u, u_X, u_Y, u_Z), where the unshimmed spectrum u changes as a function of systematic shim offsets s (one additional spectrum per offset shim). The DL model predicts the shim correction terms F_θ(x) = (ŷ_X, ŷ_Y, ŷ_Z), such that y_i − ŷ_i ≈ 0. Note that we do not have access to either S_i or the magnetic field B0. We also compare the mean absolute error (MAE) to measure in situ shimming performance between the predicted and relative offset values of the reference shims (see Section 3), with MAE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i|. Our approach stems from the concept of deep regression in a multi-input and multi-output setting. In our scenario, the model F is an ensemble of weak learners combined with a multi-layer perceptron (MLP) as the meta-model, as proposed in subsection 4.2. Deep learning architectures and pipeline Level-0. Unlike in computer vision, in NMR there is no bijective mapping between the input to the model and its output (i.e. prediction at the pixel level of the input). 
Indeed, the shimming problem starts with a field inhomogeneity in 3D space, translates this into a 1D NMR signal, after which there is no straightforward link back to the correct shim currents. Therefore, we provide the weak learners with a batch of four NMR spectra x as input and predict a three-valued vector of continuous values ŷ, i.e., the shim values. Each architecture begins with 3-5 blocks of one-dimensional convolutional layers, with varying kernel sizes, followed by two fully connected layers with 32 nodes. Heterogeneous architectures allow for higher variance in predictions that are useful for the ensemble model. Additionally, we include dropout layers after convolution and fully-connected layers to prevent overfitting [32]. The last layer uses linear activation for regression of the targets. The architecture is illustrated in Fig. 4a. Level-1. The meta-model combines features of its m heterogeneous weak learners, and represents a mixture of stacking and boosting (see subsection 2.2). We investigate the following forms: Simple average over all weak learner predictions. Non-linear combination, with a fully-connected layer, of the weak learner's regression layer (∈ R^{m×n}). A two-layer multi-layer perceptron (MLP) based on the second-to-last fully-connected layer of the level-0 models. The MLP has m × 32 nodes in its first, and 32 nodes in its second layer. Also refer to Fig. 4b for a visualization of the ensemble with an MLP-based meta-model. Spectral quality and performance metrics To judge the quality of spectra, we introduce a criterion c that can be used for global or local quality judgements of single peaks. For a spectrum g of interest and a reference spectrum r, it is defined in terms of max(·), the maximum peak height, and FWHM(·), the full width at half maximum of that peak (Eq. 2); the weights k_i can be used to increase or decrease the impact of each term. The quality parameter c indicates whether spectrum g is worse (0 < c < 1) or better (c > 1) than the reference r. Other spectral properties, such as peak symmetry, can easily extend the criterion. But since linear shims cannot lead to symmetrical line shapes, we have refrained from more precise quality measures such as the envelope of a spectral peak [52]. An extension to multiple peaks could be realized via virtual peaks [53]. Furthermore, the following metrics are introduced to analyze the performance of our method in laboratory experiments for the n = 3 first-order shims: Success rate SR ∈ [0, 1]. We defined the SR w.r.t. the criterion c. If the predicted shim setting yielded a c higher than all c in the input batch, then the method was deemed successful. For a single experiment, SR = 1 if c_sh > max(c_init, c_X, c_Y, c_Z) and SR = 0 otherwise, where (c_init, c_X, c_Y, c_Z) are the quality values for the input batch and c_sh is the criterion after shimming with DRE. Correct direction ratio DiR ∈ [0, 1]. Indicator of whether the method pointed towards the global minimum; it equals 1 if the predicted signs matched the distortion's signs, i.e., if sgn(ŷ_i) = sgn(y_i) for each shim i, where sgn(·) is the sign function, ŷ_i is the model's prediction, and y_i is the true distortion. Mean improvement of criterion c (see Eq. 2) with k1 = k2, given in percentage for c(g, r), where g is the spectrum with predicted correction and r is the initial spectrum. Averaged MAE between predictions and random distortion. Generalization to other substances. Table 1. Characteristics and acquisition parameters of the first-order shimming dataset (LinearShimDB): Nr. spectra, 9261; Shim range R, ±10000; Step size s, 1000; Shims, X, Y, Z. 
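The sketch below illustrates the quality criterion and the per-experiment metric checks described above. Because the explicit formula for c is not reproduced here, the weighted combination of peak-height and line-width ratios used below is an assumed form that merely satisfies the stated properties (c > 1 for a better spectrum, adjustable weights k1 and k2); the SR and direction checks follow the definitions given above.

```python
# Spectral quality criterion (assumed form) and evaluation metrics for one experiment.
import numpy as np

def fwhm(spectrum):
    """Full width at half maximum of the tallest peak, in data points."""
    half = spectrum.max() / 2.0
    idx = np.where(spectrum >= half)[0]
    return idx[-1] - idx[0] + 1

def criterion(g, r, k1=1.0, k2=1.0):
    """Assumed form: > 1 if spectrum g is taller and/or narrower than reference r."""
    height_term = g.max() / r.max()
    width_term = fwhm(r) / fwhm(g)
    return (k1 * height_term + k2 * width_term) / (k1 + k2)

def success(c_shimmed, c_batch):
    """SR contribution of one experiment: 1 if DRE beat every spectrum in the input batch."""
    return int(c_shimmed > max(c_batch))

def correct_direction(y_pred, y_true):
    """DiR contribution: 1 if every predicted shim offset has the correct sign."""
    return int(np.all(np.sign(y_pred) == np.sign(y_true)))
```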
Experiments

In this section, we describe prerequisites in pre-processing, hardware, and DL training, followed by our evaluation protocol and the results of the offline and in situ (online) experiments.

Setup and implementation details

Dataset creation and pre-processing. Each member of a batch of input spectra $x$ consists of one spectrum corresponding to a unique target value $y$, and three spectra with offsets of $s$ in the $X$, $Y$, and $Z$ shims, respectively. The database $\mathcal{D}$ with input-target pairs $(x, y)_i$ is constructed by mining the dataset LinearShimDB from Section 3. Each raw FID is fast-Fourier-transformed and phase-corrected to yield a 1D spectrum, using nmrglue [54] and the same phase correction values given by the system's auto-phase method. Note that the unique target value $y$ for each input $x$ is not represented by its absolute shim values, but is defined by its relative distortion w.r.t. the reference spectrum of $\mathcal{D}$. This prevents the model from learning absolute shim currents, which would depend on the hardware the database was acquired on, and forces the model to learn the relative shim offsets that will improve a given spectral shape.

In order to achieve faster convergence and generalization, our data is further pre-processed. Each spectrum is normalized by a constant normalization factor of $10^5$ (the maximum intensity for perfect shims), such that the spectral values lie within the range $[0, 1]$. To meet resource constraints, all spectra are downsampled from 32768 to 2048 data points. The regression targets (stored as int16 integers) are divided by $2^{15}$ to avoid exploding gradients, and then multiplied by 100 to avoid vanishing gradients during DL model training. We exclude dataset samples with incomplete offsets; the final subsets for training, validation, and test are thus of size 6400/800/801. Due to time constraints during the acquisition of $\mathcal{D}$ in a grid-like manner (Section 3), we expect the spectra of each instance $(x, y)_i$ to exhibit some reality gap caused by temporal drift. Therefore, we introduce an additional transfer database $\mathcal{T}$ with $|\mathcal{T}| = 100$ to differentiate from the systematic nature of the data collection. $\mathcal{T}$ is obtained under the same conditions as $\mathcal{D}$, but the spectra of each $x$ are jointly acquired; it is used to fine-tune either the weak learners or the meta-model.

Hardware interface. Communication with the Magritek Spinsolve spectrometer was enabled through a custom interface between Python and the Python-like programming language Prospa, upon which the Spinsolve-Expert software is built. With this interface, it is possible to benefit from open-source Python libraries for NMR data processing (nmrglue [54]) and deep learning frameworks (PyTorch [55] and ray tune [56]). The standard shimming routines used by the spectrometer's software are based on parabolic interpolation and the downhill simplex method; the latter is adopted for comparison.

DL training details. The level-0 base models underwent a limited neural architecture search (NAS), a technique for automating neural network architecture design [57], using ray tune [56] and random search over a variable number of layers $\ell$, kernel sizes, and other design choices. All weak learners were trained with a learning rate of 0.001, a batch size of 32, and the Adam optimizer [58] for a maximum of 150 epochs, utilizing early stopping. We selected the top-50 architectures among 300 runs w.r.t. validation error.
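A minimal PyTorch sketch of one level-0 weak learner and its training setup follows, based on the description above (1-D convolutional blocks, two 32-node fully-connected layers, dropout, linear regression head; Adam, lr 0.001, batch size 32). The channel count, kernel size, and stride are illustrative choices from the searched ranges, not the authors' selected architecture.

```python
# Sketch of a level-0 weak learner for 4-channel, 2048-point spectra.
import torch
import torch.nn as nn

class WeakLearner(nn.Module):
    def __init__(self, width=2048, channels=16, kernel=9, blocks=4, p_drop=0.1):
        super().__init__()
        convs, in_ch = [], 4                          # the 4 input spectra as channels
        for _ in range(blocks):                       # 3-5 conv blocks in the paper
            convs += [nn.Conv1d(in_ch, channels, kernel, stride=2, padding=kernel // 2),
                      nn.ReLU(), nn.Dropout(p_drop)]
            in_ch = channels
        self.features = nn.Sequential(*convs, nn.Flatten())
        feat = channels * (width // 2 ** blocks)
        self.head = nn.Sequential(nn.Linear(feat, 32), nn.ReLU(), nn.Dropout(p_drop),
                                  nn.Linear(32, 32), nn.ReLU(),
                                  nn.Linear(32, 3))   # linear activation for regression

    def forward(self, x):                             # x: (batch, 4, width)
        return self.head(self.features(x))

model = WeakLearner()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # batch size 32, early stopping
loss_fn = nn.MSELoss()
```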
The level-1 meta-models were trained with hyperparameter optimization (HPO) using random search and early stopping. We manually selected the best fully-connected and MLP-based networks w.r.t. their validation loss over 500 runs. For a detailed description, see subsection S1.2.

Hardware requirements. The DL training was performed on an AMD Ryzen 5900X equipped with 64 GB RAM and an NVIDIA GeForce RTX 3090 graphics processing unit. The LinearShimDB requires roughly 3.4 GB of disc space.

In situ evaluation protocol

We demonstrate in situ functionality by testing the method on a set of 100 random distortions $y_X, y_Y, y_Z \in [-10000, 10000]$ of the $X$, $Y$, $Z$ shims, drawn from a uniform distribution. We report the success rate SR, the direction ratio DiR, the mean improvement of our quality parameter $c$, and the MAE over five different model types, including single weak learners and the ensemble variants.

Level-0 training results

Offline experiments, i.e., training on static data, show that it is possible to predict three distinct variables from an input of four 1-D signals with no apparent correlation between the input and output dimensions. The best weak learner achieved an MAE of $596 \pm 769$ (mean $\pm$ standard deviation) for a step size of $s = 1000$ on the test set. As the achievable precision is limited by the sampling resolution of the underlying training data, results near $1/2$ of the step size $s$ indicate good performance. Detailed results are given in Table S1. The entire network, including the meta-model, was tested in situ only.

In situ results

The most crucial aspect of the method is not merely its training performance, but its applicability. Thus, the performance of our method is evaluated in laboratory experiments; a selection of results is given in Table 3 for our most promising methods (single model and MLP-based) and different substances (H2O, ethanol and isopropanol).

Interpretation of different model types. The results indicate that our method works in practice. Even weak learners achieved an SR of 93% and a large improvement (mean of +435%) in spectral quality for water. However, the variance in criterion improvement and error remained high. Here, the ensemble method with an MLP-based meta-model achieved more robust but conservative predictions. Overall, a single model and the MLP-based ensemble yield comparable results for all measured metrics. The absence of improvement from averaging the top-50 untuned models confirms the need to train a meta-model. Furthermore, the advantage of a two-layer MLP with non-linear connections to the second-to-last features is shown over a simple non-linear combination of the weak learners' last layers. We also fine-tuned a single model on the transfer dataset $\mathcal{T}$, which yielded worse in situ results; overall, a transfer set seems to be unnecessary. The robustness problem of a single model compared to the MLP-based ensemble is visualized in Fig. 5, based on the experiments underlying Table 2 for water. The single model can often achieve narrower linewidths but shows higher variance.

Interpretation of generalizability. Through experiments on ethanol with molar fractions $v \in \{0.1, 0.5\}$ and isopropanol with $v = 0.5$, we strive to show the generalization of the method to samples other than H2O. Indeed, the training sample in both datasets $\mathcal{D}$ and $\mathcal{T}$ is water, of course with only a single peak. Surprisingly, we achieve success rates above 91% for samples with more than one peak, despite the risk of confusion among the peaks.
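For concreteness, the MLP-based meta-model compared throughout this section can be sketched as follows. This is an assumption-laden companion to the weak-learner sketch above: the `penultimate` accessor depends on how the level-0 module is structured and is purely illustrative.

```python
# Sketch of the MLP-based level-1 meta-model: it concatenates the 32-dim
# second-to-last features of m frozen weak learners and maps them through a
# two-layer MLP (m*32 inputs -> 32 -> 3), as described in the architecture section.
import torch
import torch.nn as nn

class MLPMetaModel(nn.Module):
    def __init__(self, weak_learners):
        super().__init__()
        self.weak_learners = nn.ModuleList(weak_learners)
        for wl in self.weak_learners:                 # level-0 models stay frozen
            for p in wl.parameters():
                p.requires_grad_(False)
        m = len(weak_learners)
        self.mlp = nn.Sequential(nn.Linear(m * 32, 32), nn.ReLU(), nn.Linear(32, 3))

    def penultimate(self, wl, x):
        # Run a weak learner up to its second-to-last 32-node layer by
        # dropping the final Linear(32, 3) regression head.
        return wl.head[:-1](wl.features(x))

    def forward(self, x):
        feats = [self.penultimate(wl, x) for wl in self.weak_learners]
        return self.mlp(torch.cat(feats, dim=1))
```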
Hardware requirements. The most resource-intensive MLP-based ensemble model required 190 ms on average for a prediction, using an Intel Core i5-8500 and 200 MB of RAM. The required disc space was between 0.5 and 2.5 MB for each weak learner, and 200 KB for the meta-model. Compared to the acquisition times of the NMR measurement and the storage space available on recent computers, these requirements are negligible. One complete cycle of DRE, including spectra acquisition, takes 31 s without any special time-saving efforts.

Comparison

We compare our automated shimming method to Magritek's built-in implementation of the downhill simplex method, which in turn is based on the algorithm in [50]. We would expect faster convergence of the simplex method when using the improvements proposed by [12]. Nevertheless, Yao et al. state that their method behaves similarly to the simplex method when fewer shim coils are used, so we restrict our considerations to the Magritek implementation of the regular downhill simplex method. This implementation requires at least $n + 1$ measurements for $n$ shims to initialize its simplex structure, and one to four (on average two) function evaluations per iteration [11]. Our method consistently needs four spectra for one iteration, and one spectrum to check the results. We used a single model and DRE with an MLP as the meta-model for comparison.

Table 2. In situ results of automated shimming with the DRE method using a single model and different ensemble types over different substances, with molar fraction $v$. Values are reported as mean $\pm$ standard deviation over 100 random distortions drawn from a uniform distribution. The best values are marked in bold. Abbreviations: FC = fully-connected, MLP = multi-layer perceptron, $\mathcal{T}$ = transfer database, SR = success rate, DiR = direction ratio, MAE = mean absolute error, $c$ = criterion. (Columns: single model; ensemble; untuned.)

Although the downhill simplex is known to depend little on the initial simplex's size and shape [11], and tends to produce rapid drops from the initial values [59], we were able to accelerate the shimming process. We compared results obtained from the standard Nelder-Mead method with $i$ iterations and step size $s = 1000$ versus the results for the simplex with $i - 4$ iterations when initialized with our method (see Table 4). We reduce the iterations by four because one iteration of DRE needs four spectra for its prediction. Furthermore, we compared the number of function evaluations necessary for the simplex to reach a criterion equivalent to the DRE prediction, as shown in Table 5. The procedure is as follows: First, our method "deep regression with ensembles" (DRE) predicts shim settings for a random distortion drawn from a uniform distribution. Then, the downhill simplex method is started from the same distortion with a step size of $s = 1000$ (as used for the database in Section 3). The algorithm is stopped when it reaches a linewidth equivalent to the one achieved with the DRE prediction, and the number of acquisitions (function evaluations) is reported. If the simplex is not able to find an equivalent linewidth within 50 iterations, it is stopped. The results in Tables 4 and 5 demonstrate that using our method, stand-alone or in combination with regular shimming methods, provides an advantage in either the number of necessary acquisitions or the achieved spectral quality.
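The comparison procedure just described could be sketched as follows. This is a heavily hedged illustration: the `acquire` callable is a hypothetical surrogate for the spectrometer, the shim-application sign convention is an assumption, and `criterion` refers to the metric sketch given earlier.

```python
# Sketch of the comparison protocol: start Nelder-Mead from the same random
# distortion as DRE and record the acquisition at which it first matches the
# criterion achieved by the DRE prediction.
import numpy as np
from scipy.optimize import minimize

def count_simplex_evals(distortion, dre_prediction, acquire, reference, criterion):
    target_c = criterion(acquire(distortion - dre_prediction), reference)
    evals = {"n": 0, "reached": None}

    def objective(shims):
        evals["n"] += 1
        c = criterion(acquire(distortion - shims), reference)
        if c >= target_c and evals["reached"] is None:
            evals["reached"] = evals["n"]             # first acquisition matching DRE
        return -c                                     # simplex maximizes the criterion

    minimize(objective, x0=np.zeros(3), method="Nelder-Mead",
             options={"maxiter": 50})                 # stop after 50 iterations
    return evals["reached"], target_c
```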
Note that deep regression (DR) apparently yields similar performance to deep regression with ensembles (DRE) in quality improvement, but the ensembles demonstrate an advantage in the number of necessary NMR acquisitions compared to the simplex method.

Table 3. Exemplary selected results of shimmed spectra for water, ethanol ($v = 0.5$), and isopropanol. A single model's performance is compared to ensembles with an MLP-based meta-model. Additionally, the optimal spectrum obtained by applying the simplex method only to the first-order shims is reported. Abbreviations: EtOH = ethanol, i-PrOH = isopropanol.

Discussion

Discovering new methods for fast and efficient NMR shim coil calibration remains a challenge, but we see great potential in a fusion of traditional and modern algorithmic methods to achieve both faster and more precise shimming.

Assumptions of our approach. Currently, our approach mimics a scenario where only the first-order shims are available to shim a probe from scratch. Thus, the best achievable FWHM is of the order of tens of Hz, but our method shows that very broad initial lineshapes can be improved, very nearly reaching the optimum. Furthermore, using linear shims is the first step for DL to advance shimming in more complex and non-standard cases (e.g. high-throughput parallel spectroscopy, where some samples may be located off-center with respect to the shim system, or the shimming of micro coils). Although first-order shims generally have a prominent influence on reducing inhomogeneity, dealing with higher-order shims should be considered in future work. We are also convinced that localizable information, e.g. about axial magnetization [17], enabled by special hardware additions, could help the DL method and allow a reduction of the number of required input spectra. Our method contrasts with traditional shimming approaches in that it is not iterative, i.e., the result emerges after a single step. Therefore, we cannot guarantee that iterating DR or DRE will converge to better results under other circumstances.

Deep learning considerations. It is also unclear how, or whether, supervised learning can be scaled to the shimming of higher-order shims, especially w.r.t. the exponentially increasing requirements of the data acquisition task. One approach to counteract this limitation is to employ a sufficiently realistic digital twin (simulation) of the shimming problem to generate synthetic data. In any case, this study has not yet unlocked the full potential of deep learning as applied to the shimming problem in NMR. In particular, recent advances in explainable artificial intelligence (XAI) [60] can help in understanding the decisions made by neural networks, and new neural architectures such as transformers [61,62] are attracting great attention and yield excellent performance in various tasks. However, to keep the complexity of our method grounded, we would expect higher stability of a single model by using benchmark architectures such as ResNet [63], and increased efficiency by adopting other deep learning techniques [64]. Despite the limitations described above, we could show that the concept of using DL for NMR shimming is a promising approach to accelerate the entire shimming process. In general, we also wish to draw attention to the lack of published code for related shimming methods, and to the difficulty of comparing published results obtained on diverse spectrometers and hardware setups.
Conclusion

In this paper, we have shown that deep learning can be used for fast, first-order shimming in a low-field NMR setup, i.e., DL can tackle the ambiguity and lineshape problems inherent in shimming. With our dataset for shimming, we have furthermore established a basis for continued research in this area. The applicability of DL was demonstrated both offline and in situ: first, by training neural networks to simultaneously predict the three linear-field shim currents necessary to partially cancel an arbitrary distortion; and second, through the deployment on a spectrometer of a meta-model based on an ensemble of weak learners, which revealed that our non-iterative method generally points towards the solution and shows a high success rate in improving spectral quality. Despite the models being trained entirely on water, we observed generalizability to more complex samples with more than one spectral peak.

Financial disclosure

J.G.K. declares a financial interest in the startup company Voxalytic GmbH, which develops and sells NMR equipment. The other authors declare no interest.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

We thank Craig Eccles of Magritek for his great support. The plots in Fig. 3, Table 3 and Fig. 5 were made with the Python library SciencePlots [65], and we sincerely thank John Garrett for making the code available online. We acknowledge support by the KIT-Publication Fund of the Karlsruhe Institute of Technology.

Appendix A. Supplementary material

The following supporting information is available as part of the online article: supplementary material at the end of this document.

Table 4. Comparison of criterion improvement w.r.t. the initial spectrum between the default downhill simplex and the simplex initialized with our method. The default simplex is run for $i$ iterations and DR/DRE + simplex for $i - 4$ iterations, because one iteration of DR/DRE requires four measurements.
9,552.2
2022-02-09T00:00:00.000
[ "Physics", "Engineering", "Computer Science" ]
Band Gaps and Single Scattering of Phononic Crystal

A method is introduced to study the transmission and scattering properties of acoustic waves in two-dimensional phononic band gap (PBG) materials. First, it is used to calculate the transmission coefficients of PBG samples. Second, the transmitted power is calculated based on the far-field approach. We have also calculated the scattering cross section; the results indicate that phononic band gaps appear in frequency regions between two well-separated resonance states.

Introduction

The acoustic properties of a locally homogeneous and isotropic composite material are characterized by a set of parameters varying in space: the mass density ρ and the Lamé coefficients λ and μ. In this paper we focus on composite materials which consist of homogeneous particles distributed periodically in a host medium. They are characterized by different mass densities and Lamé coefficients. When identical particles are distributed periodically in a host medium, the composite material may be referred to as a phononic crystal. Recently, the propagation of elastic or acoustic waves (EL or AC waves) in a phononic crystal has received much renewed attention [1-9]. These new materials can be of real interest since a large contrast between the elastic parameters is allowed. For example, systems composed of very soft rubber [10] are more likely to exhibit low-frequency gaps with a structure of small dimension. This can lead to promising applications such as low-frequency vibration/noise devices, including lenses and acoustic interferometers [11]. On the other hand, more sophisticated combinations, such as fluids infiltrated in a drilled solid [5] or solid-solid systems [7], have been demonstrated to produce a full phononic band gap for ultrasound. Phononic crystals make possible the achievement of complete frequency band gaps, which are useful for prohibiting specific vibrations in precision technologies such as transducers and sonar.

The plane-wave expansion method, the finite-difference time-domain method, and multiple-scattering theory are commonly used to study the elastic response of phononic crystals [12-15]. In this work, in order to study the propagation of acoustic waves in a phononic crystal, we consider a two-dimensional periodic system consisting of finite cylinders of circular cross section. The system is periodic in the x-y plane, and within it there is translational invariance in the direction (z) parallel to the cylinders. The intersection of the cylinders with a transverse plane makes a square lattice. We treat finite PBG samples as scattering objects in an open geometry; the radiation boundary condition is naturally imposed. Using the far-field approach, we have independently adopted this method to study the transmission and scattering properties of finite PBG samples. In the case of transmission, a generalized transmission coefficient can be defined in terms of the far-field total scattering amplitude, from which we can retrieve the dispersion relations and the decay length inside a gap. With this method, the incident field, the scattered field and the total scattering amplitude take very simple forms, and the calculation can be greatly simplified. We explicitly demonstrate that this method can produce transmission results that are in excellent quantitative agreement with the available experimental data.
Model and Formulae

The displacement vector $\mathbf{U}(\mathbf{r}, t)$ in a homogeneous elastic medium of mass density ρ and Lamé coefficients λ, μ satisfies the elastodynamic (Navier) equation

$$\rho \frac{\partial^2 \mathbf{U}}{\partial t^2} = \mu \nabla^2 \mathbf{U} + (\lambda + \mu)\, \nabla(\nabla \cdot \mathbf{U}). \quad (1)$$

In the case of a harmonic elastic wave with angular frequency ω, we have $\mathbf{U}(\mathbf{r}, t) = \mathbf{u}(\mathbf{r})\, e^{-i\omega t}$, and Equation (1) reduces to the time-independent form

$$\mu \nabla^2 \mathbf{u} + (\lambda + \mu)\, \nabla(\nabla \cdot \mathbf{u}) + \rho \omega^2 \mathbf{u} = 0.$$

Defining the displacement potentials of Eq. (4), where $\hat{\mathbf{z}}$ is the unit vector along the z-axis, χ and ψ are the displacement potential functions of the longitudinal and the two transverse waves, respectively. The displacement potential function of the incident longitudinal wave can be expanded in terms of cylindrical Bessel functions [16],

$$\chi^{inc} = e^{i k_z z} \sum_{n} i^n J_n(k_r r)\, e^{i n \theta},$$

where $k_r$ is the radial component of the incident wave vector, $J_n$ is the Bessel function of the first kind of order $n$, $k_z$ is the z-axis component of the incident wave vector, $k_l$ is the longitudinal wave number, $r$ is the normal distance of the field point from the z-axis, and θ is the angle of direction. The displacement potential functions of the longitudinal and transverse scattered waves can likewise be expanded in the standard partial-wave form,

$$\chi^{sc} = e^{i k_z z} \sum_{n} a_n H_n(k_r r)\, e^{i n \theta}, \qquad \psi^{sc} = e^{i k_z z} \sum_{n} b_n H_n(\kappa_r r)\, e^{i n \theta},$$

where $H_n$ is the Hankel function and $\kappa_r$ is the radial component of the transverse wave vector. Using the same method we can expand the displacement potential functions of the incident transverse waves in terms of cylindrical Bessel functions; the displacement potential functions of the fields inside the cylinders are expanded analogously.

In the following we consider a sample of the two-dimensional periodic-array system. The sample is made of rods of radius d with lattice constant a. The position of the rod with index $j$ is $\mathbf{r}_j$. Around this rod there are incident waves from external sources and scattered waves from the other rods; the total field around this rod is the sum of these contributions, with expansion coefficients fixed by the boundary conditions. In light of the continuity of the displacements, and of the continuity of the stresses, one obtains a linear system for the coefficients, where $\sigma_{ij}$ are the stress tensor elements and $u_{ij}$ are the strain tensor elements that result from the components of the displacement vector. The superscripts inc, sc and in denote the incident, scattered and inner fields, respectively.

In the far field ($k_l r \to \infty$), the total scattering amplitude of the longitudinal waves follows from Equations (9)-(11). For acoustic wave transmission, a slit of width w along the y direction is placed between a source and the sample, and the acoustic waves propagate along the x direction. In this case, the incident field can be obtained from the Kirchhoff integral formula [17]. From the vector of the energy flux density one obtains the far-field energy flux, and we define a transmission coefficient T as the ratio of the transmitted energy flux to that of the incident wave at θ = 0.
Therefore, the transmission coefficient T follows from Eq. (22). According to Equation (17) and the definition of the scattering cross section, the dimensionless scattering cross sections of the longitudinal and transverse scattered waves take the standard partial-wave form, as sums over the squared moduli of the scattering coefficients, where $\hat{\sigma}_l$ and $\hat{\sigma}_t$ are the scattering cross sections for longitudinal and transverse incident waves respectively, $k_l$ and $k_t$ are the longitudinal and transverse wave numbers for the host, $c_{l1}$ and $c_{t1}$ are the longitudinal and transverse wave velocities for the host, d is the diameter of the cylinder, and $a_n$ and $b_n$ are the longitudinal and transverse scattered-wave coefficients.

For elastic media there is a reasonable amount of work on infinite systems [18-20]. Real systems, however, are finite and have boundaries. Under a proper choice of parameters, states that slide and propagate along the surface while being localized in the direction normal to it should therefore appear; these are analogous to electronic surface states in crystals [21] and to those calculated for photonic systems [22]. According to M. Torres et al., surface-state solutions are consubstantial with finite systems and exist for sonic propagation in finite elastic media. They deal with several realizations of structures for ultrasonic propagation in elastic media in order to observe such surface-state modes and localization phenomena at linear and point defects [3].

Numerical Results

In this paper, the finite-sized PBG sample used in the calculation consisted of 6 rows along the x axis and 36 columns of steel rods arranged in an air host on a square lattice with lattice constant a, a filling fraction of f = 0.55, a rod radius of 0.35a, and a room temperature of 25 °C. The mass density of the rods is ρ = 7800 kg/m³, the longitudinal wave velocity $c_l$ = 5940 m/s and the transverse wave velocity $c_t$ = 3220 m/s [23]; the slit has width w = 3.5a and is placed at a distance of l = 2.1a. From Equation (22) we have calculated the transmission coefficient and the total transmitted power as functions of the dimensionless frequency. Figure 1 shows the calculated results for the transmission coefficient and total transmitted power in the dimensionless frequency region 1.75 to 2.25. Acoustic wave propagation is inhibited there, forming frequency band gaps: the transmission coefficient is strongly suppressed and the total transmitted power becomes zero. The results are in excellent agreement with previous results from Ref. [1].
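To make the single-scattering picture that follows concrete, here is a minimal sketch computing partial-wave coefficients and the dimensionless cross section for a sound-hard (rigid) cylinder. This is the scalar-acoustics simplification at normal incidence ($k_z = 0$), not the full longitudinal/transverse elastic calculation used in the paper.

```python
# Partial-wave scattering by a rigid (Neumann) cylinder in a fluid host.
import numpy as np
from scipy.special import jvp, h1vp

def rigid_cylinder_cross_section(k, radius, n_max=30):
    """Dimensionless 2-D scattering cross section sigma / d for a rigid cylinder."""
    n = np.arange(-n_max, n_max + 1)
    # Neumann condition d/dr (psi_inc + psi_sc) = 0 at r = radius gives
    # a_n = -i^n J_n'(k a) / H_n^{(1)'}(k a).
    a_n = -(1j ** n) * jvp(n, k * radius) / h1vp(n, k * radius)
    sigma = (4.0 / k) * np.sum(np.abs(a_n) ** 2)   # total cross section per unit length
    return sigma / (2.0 * radius)                  # normalize by the diameter d

# Example: scan k*d around the band-gap region quoted below (1.9 <= k*d <= 2.8).
d = 1.0
for kd in (1.0, 1.9, 2.8):
    print(kd, rigid_cylinder_cross_section(kd / d, d / 2))
```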
Taking T ≈ 0 in Equation (22) identifies the regions of large variation with frequency. This is related to the phase shift of the scattered waves: writing the forward scattering amplitude as $f_s(0) \sim e^{i\Phi}$, where Φ is the phase difference between the outgoing and incoming waves, Φ changes rapidly near the band edges. The derivative of Φ(f) gives information on the group velocity $v_g$: at a band edge, dΦ/df diverges and $v_g$ approaches zero. Therefore, from $f_s(0)$ we are able to extract the effective elastic constant for frequencies inside a band and the decay length for frequencies inside a gap.

From Equations (23) and (24) we have calculated the dimensionless scattering cross sections. Here we try to connect the appearance of a gap, and other characteristics of the band structure of a periodic system of cylindrical inclusions in a homogeneous matrix, with the form of the cross section of a single inclusion. This connection determines to what extent single scattering is an important factor in shaping characteristic features of the band structure, and how it can be used to predict the possible existence of gaps. For cylindrical inclusions in a host material, the existence of full gaps has been connected to the following picture: there are two channels for propagation, one mainly through the host material and the other employing the resonance states. Coherent jumping from resonance state to resonance state creates this second channel, in analogy with the linear combination of atomic orbitals (LCAO, otherwise called the tight-binding approximation) in electronic band structure theory.

In attempting this extension of the LCAO approach to AC waves, one should keep in mind some important differences between the two cases. Resonances are not true eigenstates rigorously localized inside and around each scatterer, as the atomic-like orbitals are. Moreover, because ω² corresponds to the case where the electronic energy is higher than the maximum of the potential, there is an additional extended channel through the host material. This means that resonant states for AC waves are states embedded in the continuum, an aspect of the problem not encountered in the electronic case.
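The band-edge argument above, relating the transmission phase Φ(f) to the group velocity, can be written out explicitly. The sample thickness L below is an assumed quantity introduced for illustration; this is a standard reconstruction, not an equation from the paper.

```latex
% Effective wave number from the accumulated transmission phase, and the
% resulting group velocity (sample thickness L assumed):
\[
  k_{\mathrm{eff}}(f) = \frac{\Phi(f)}{L}, \qquad
  v_g = \frac{d\omega}{dk}
      = 2\pi \left(\frac{dk_{\mathrm{eff}}}{df}\right)^{-1}
      = \frac{2\pi L}{\,d\Phi/df\,},
\]
% so a diverging d\Phi/df at a band edge forces v_g -> 0, as stated above.
```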
Conclusions

In this work we have investigated theoretically the propagation of acoustic waves in a binary 2D phononic crystal constituted of a square array of parallel circular steel cylinders in an air matrix. We have limited the wave propagation to the plane perpendicular to the cylinders. The numerical calculations prove unambiguously the existence of an absolute stop band independent of the direction of propagation of the acoustic waves. Besides the band gaps, one can establish some qualitative and even semi-quantitative correspondences between the experimental and theoretical transmission spectra inside the pass bands. However, a more quantitative comparison would require repeating such experiments with other samples (for instance to check for possible defects introduced during sample preparation, different thicknesses of the samples, etc.); in this respect, an analysis of the eigenvectors associated with the different modes would also be helpful for understanding the details of the experimental transmission spectra. We extended the far-field approach and presented the transmission and scattering properties of acoustic waves in finite-sized phononic band gap (PBG) materials; this method greatly simplifies the calculation. We found that a full band gap is created between well-separated resonance states, in which one cannot achieve coherent jumping from a resonance state to a neighboring resonance state, in analogy with the linear combination of atomic orbitals in electronic band structure; at the same time, propagation through the host material is inhibited. This results in the appearance of a full band gap.

Figure 1. Acoustic band structure, transmission coefficients and total transmitted power for a square array of rigid stainless-steel cylinders in an air host. The filling fraction is f = 0.55. (a) The band structures reproduced from Ref. [1]; (b) solid curves: transmission coefficients; dashed curves: total transmitted power.

Figure 2 shows the calculated dimensionless scattering cross sections. The gap appears at 1.9 ≤ k_l d (k_t d) ≤ 2.8. The arrows denote the position of the band gap, and Figure 2(c) shows one full band gap. These results agree with Figure 1.

Figure 2. Dimensionless scattering cross sections for a steel (a) and a rigid (b) cylinder embedded in an air host. Panel (c): results obtained by subtracting the amplitudes of (a) and (b). Solid curves: dimensionless scattering cross sections for the longitudinal wave; dashed curves: dimensionless scattering cross sections for the transverse wave.
2,865.8
2011-12-28T00:00:00.000
[ "Materials Science", "Physics" ]
Navigating the Stay-at-Home Order with Benedictine Stability In this article, I argue that Benedictine stability might provide a rational modulation for some people to not only cope with but also flourish during the pandemic vis-à-vis the stay-at-home (SHO) order. I will not argue that those who obey the SHO are more rational than those who don’t or vice versa. Instead, I will argue that those who end up following the SHO, whether voluntarily or involuntarily, can rationalize following the SHO by learning from the Benedictine vow of stability. First, stability in a physical space reimagined as a kind of retreat from society might be beneficial for rejuvenating oneself and pursuing what one values. Second, stability negatively discourages people from escaping a difficult reality and positively encourages them to overcome challenges in the institutions in which they belong. Third, stability can be seen as a necessary context for the betterment of character. There have been variations of the SHO in different places with respect to what people can or cannot do, but the basic idea is that people should stay in their domiciles except when they have to make trips for essential goods, such as food and medicine, or to work in businesses deemed essential, such as grocery stores, hospitals, and public transportation.⁵ Surely when emergency situations occur, such as wildfires or hurricanes, evacuations are permissible and might even be mandatory. ⁶ The SHO has been met with various reactions by people around the world, from commitment to resistance and from trust to disbelief.⁷ Many people have complied with the order, thinking that the SHO would help slow down the escalation of the pandemic. Others were skeptical. One study found that those who believe to some degree that the "Coronavirus is a bioweapon developed by China to destroy the West" or that "Jews have created the virus to collapse the economy for financial gain" would be less likely to follow the SHO.⁸ The resistance to the SHO is fueled by the multidimensional challenges it presents to lives at different levels. Individuals, businesses, and countries have suffered the economic impacts from the SHO, with the loss of income and the unemployment rate sky-rocketing.⁹ The psychological costs of the SHO have also been acknowledged by various studies. One study in the United States examining the period between March 27, 2020 and April 5, 2020 has shown that perceived financial worry, depression, anxiety, and loneliness have increased due to the SHO.¹⁰ Relationships are struggling as well, as indicated by the reported increase of intimate partner violence, domestic abuse, stress, and divorce rate.¹¹ In January 2021, the World Health Organization warned of short-and long-term mental health problems for many people, including depression and substance abuse disorders.¹² Depending on one's views about the trustworthiness of the media, the reliability of the governments, etc., if rationality is understood as means-end rationality (Zweckrationalität),¹³ both resisting and complying with the SHO can be rational in different respects. Those who think that the SHO is based on flawed data or on a hidden political agenda would reject it because it is deemed unnecessary and even harmful. Those who think that the SHO is completely justified would obey it because it is deemed expedient for the well-being of individuals and society. My aim in this article is not to argue that those who comply with the SHO are more rational than those who don't or vice versa. 
Rather, I suggest that there is a more rational way to execute the SHO for people who do end up staying at home, either voluntarily or involuntarily. My proposal is that one can rationalize following the SHO by learning from the Benedictine vow of stability. This strategy is an example of how religion might provide a rational way of coping with and even flourishing during the coronavirus pandemic. More precisely, I argue for three things. First, stability in a physical space reimagined as a kind of retreat from society might be beneficial for rejuvenation and the pursuit of what one values. Second, stability might negatively discourage people from escaping a difficult reality during the pandemic and might positively encourage them to overcome challenges in the social institutions where they belong, such as the institutions of work, family, and religion. Third, stability can be seen as a necessary context for character development.¹⁴

13 Means-end rationality or instrumental rationality may be expressed in the following form: "If I desire X, I should do Y (because Y would be the rational thing to do if I want to get X)."
14 I should clarify at the outset that this rationalization strategy does not readily presuppose any particular normative theory, such as utilitarianism, Kantian deontology, or virtue ethics. One might be able to use the utilitarian or deontological perspectives in discussing Benedictine stability and the SHO. Nevertheless, given the emphasis on virtues in Benedictine spirituality (e.g., patience, obedience, and discretion), there is a prima facie reason to think that virtue ethics would be a natural ethical framework to utilize.

2 Benedictine stability

In the Regula Benedicti (RB), St. Benedict (480-543 AD) provides a manual for monks who, for most of their lives, "stay at home" in their monastery.¹⁵ There is an understanding that every Benedictine monastery "is, and ought to be, a home."¹⁶ The monks can sometimes make trips that are deemed necessary for their spiritual advancement, such as to visit holy shrines and holy people.¹⁷ In general, however, "staying put" in a monastic enclosure is the rule for monks, where they engage in worship, study, discipline, and work. What may be counterintuitive for many people today is that monks voluntarily choose not only to be a part of a community but also to stay at home in the spatial sense for their entire lives. They believe that staying in a monastery can be an occasion for spiritual growth. Viewed from this angle, it isn't surprising that St. Benedict lists the following three vows that a novice should take when entering a Benedictine monastery: the vows of stability, conversion of life, and obedience: Suscipiendus autem in oratorio coram omnibus promittat de stabilitate sua et conversatione morum suorum et oboedientia. [The one to be received, however, must first promise his stability, fidelity to the monastic lifestyle and obedience before all in the oratory.] (58.17)¹⁸ The concept of Benedictine stability implies constancy and perseverance in physically staying put in a certain monastery.¹⁹ Let us take a look at this idea.

From the civitas to the cloister

A Benedictine monk is called to leave society and embrace a life in the physical monastic cloister.²⁰ Although there is a sense in which the cloister is a part of the civitas, I am using the term civitas to refer to the active life (vita activa) in society as opposed to the contemplative monastic life (vita contemplativa). Please note, however, that this distinction between the active and contemplative life, although useful to highlight certain practices or emphases, is ultimately superficial because contemplation is a kind of action, and action can be a form of contemplation.
A person who has decided to become a Benedictine monk would face challenges in living in a confined physical space, possibly for the rest of his life. A monastic life is not luxurious, and there are tedious rules to follow that demand discipline and sacrifice. St. Benedict thus warns people considering the monastery that there are hardships in store. After a period of two months, if an inquirer still perseveres in stability (de stabilitate) (RB 58.7-9), he can continue to the next step of discernment. The aspiring monks see the value and the rationality of voluntarily moving into such an enclosed space because they usually think such space allows them to seek God and obey God's commands (RB 58.7-9). The SHO differs from the vow of stability in that people do not always comply with the SHO voluntarily. Instead, those who comply with the SHO might do so out of necessity for health reasons. But we can see that the movement of a monastic candidate from the civitas to the cloister is similar to the move many people make during the pandemic from the active life in society to their secluded homes for months with no end in sight yet. Again, this is not to discount the fact that even during the lockdown, people are often busier working from home. As I mentioned before, there is a means-end rationality that one can provide to ground this move during the pandemic, which is somewhat similar to the Kantian hypothetical imperative: "If I want to minimize the risk of contracting the coronavirus, then I will comply with the SHO as best as I can." This kind of rationality might get people to comply with the SHO, but the reason to do so can still be fortified to help them stay at home for a prolonged period of time. For those following the SHO, the Benedictine vow of stability might provide a way for them to find more substantive reasons why the SHO can be valuable. This strategy is a shameless call to rationalize the SHO. For instance, the SHO might be seen as a chance to reevaluate one's life, take stock of one's resources, and plan for what may be coming next. And although some have become busier at home due to the added responsibilities of taking care of family members and working at the same time, the SHO may have made available times that were previously used for commuting or unnecessary travel. Like the Benedictine monks in their enclosure, people who follow the SHO may engage more in prayer, in work that is more accountable and productive, in the cultivation of discipline and virtues, and in the building and rebuilding of relationships. In short, one may find rationality in the SHO because it may help the person pursue what is deemed valuable, similar to how monks are able to find withdrawing from society to be beneficial.

19 The architecture of the monastery itself reflects an aspiration to stability with the interconnectedness of all the building functions. See Irvine, "The Architecture of Stability."
20 To see the debate between two interpretations of stability, whether it should be understood as stability of place (stabilitas loci) or stability of the monastic vocation (stabilitas status), see Monson, "Status or Loci?" My own understanding is that stability in RB should be understood as pertaining to both place and status.
From the cloister to the community The vow of stability or the promise to stay in a certain monastery, nevertheless, should be understood not only as a promise to remain in a particular physical monastic enclosure, but also as a vow to stay in a congregation or community, which should be distinguished from the physical monastery building.²¹ When a monastic candidate enters a monastery, he becomes a brother to the other monks and the abbot becomes his father. A monastery is an idealized family where discipline is upheld and love is practiced. The community or the congregation of monks must be a functioning institution for the monastery to flourish. Disillusionment is not an excuse for escapism, but an invitation to finding resolutions for the whole community. As Michael Casey writes, "Stability prevents us from running away from necessary development."²² Whatever challenges and conflicts are happening in the monastery, monks have to communicate, negotiate, and resolve those conflicts in order to live together. Of course, quitting from the monastery is a possibility, but doing so will not cease all problems. There are challenges in every institution. The grass is not always greener on the other side, as it were. By contrast, the vow of stability invites monks to be faithful to the community for the rest of their lives. This is the second rationale that one can learn from the Benedictine way of life. Whether a marital condition is exacerbated due to the SHO during the pandemic or whether family members have come to feel more suffocating during the lockdown, Benedictine stability invites people to stay faithful to their community and work things out. In this way, the vow of stability is not only a move from the civitas to the cloister, but also from simply living in the cloister to doing so in a way that manifests fidelity to the community. Whether it is for the institution of marriage, family, work, or religion, people need to "make it work" through communication, therapy, and conflict resolutions. In normal circumstances (e.g., where there is no extreme domestic abuse), such a vow of stability might help people keep their communities intact during the pandemic. From the community to character We've seen that the Benedictines first move from society to the cloister, and then from the cloister to the community. However, they need to progress further to attaining stabilitas (which is a character trait) in the community. One can become a monk, vowing to be faithful to stay in a physical enclosure and to be a part of a community, but if there is no stability in his mind and character, the vow can't be kept to the fullest. A monk can't be a good monk if he is constantly anxious and if his thoughts and emotions are occupied by the pleasures and fantasies of the world beyond the monastery walls. I should like to make it very clear now that the movements from the civitas to the community and to character are not always temporally or logically prior to one another, but are instead intertwined with and reinforced by each other. What would instead be the desired characteristics of the monks in the monastery? In RB 4, St. Benedict provides a list of the tools of the good works, which consists of exhortations and commandments for the monastic community. The list includes more serious prescriptions, such as the Decalogue, ethical commands, and a call to character reform. It also addresses practical issues, such as wine-drinking and making jokes. 
The Benedictine monks are then characterized as virtuous people who utilize these tools of good works in a community and for the community. What interests us in this article is the context in which these instruments of good works are utilized: Officina vero ubi haec omnia diligenter operemur claustra sunt monasterii et stabilitas in congregatione. [The workshop where we should work hard at all these things is the monastic enclosure and stability in community.] (RB 4.78) More than simply being an attitude or a trait, stability in a community is also understood by St. Benedict as a "workshop" where the instruments of good works can be honed. Without stability as the backdrop for this to happen, the monks would not be able to grow in grace and virtue. Accordingly, the life of the Benedictines can't simply be a life confined to a physical space or to a community; rather, it should also be a sui generis monastic community life that is committed to the practice of stability itself. Living in a monastic space without embracing the monastic life would be inauthentic. Embracing a monastic life without living in a monastery is tempting oneself to one's downfall. It is this pairing of monastic physical space and monastic lifestyle that makes things work in a Benedictine monastery.²³ In turn, the commitment and the practice of stability would enable the monks to fulfill the other Benedictine vows, especially the vow of conversatio morum (fidelity to monastic life). This vow requires that monks constantly make daily progress both in their inner character and in their outer lifestyle as monastics. St. Benedict says in Prologue 49, "As we progress in the monastic life and in faith, our hearts will swell with the unspeakable sweetness of love, enabling us to race along the way of God's commandments." Stability serves as the necessary context for such change of character to happen. Notice that stability is not identical to stagnation. On the contrary, stability encourages moral and spiritual betterment while recognizing the great benefits of staying faithfully in one community. For example, the monks would be able to learn and exercise the virtues of perseverance, love, and gratitude.²⁴ In following the SHO, people have struggled with loneliness, anxiety, anger, and boredom. These are the things that Benedictine monks also struggle with. The monks have to fight against acedia (the noonday demon), feelings of isolation, and a lost sense of self. The monks must address these issues at home and in their community, with stabilitas as the necessary context for spiritual development and self-discovery, by finding their calling and true selves.²⁵ People who follow the SHO can do the same. When people must stay at home during the pandemic, they can use their fidelity to staying in their physical homes and communities (e.g., family, work, religious institution) as a workshop to become more virtuous. This might require a closer look at one's own thoughts and emotions to arrive at an honest self-awareness. People can ask what they should do to improve their own character as well as what they could do for other people. With respect to care for others, the phenomenon of "caremongering," in which quarantined people during the coronavirus pandemic start caring for each other, is an example of how people can display social solidarity even during statewide isolation.²⁶ In other words, self-cultivation does not exclude other-regarding concerns.
Not only might the SHO help to curb the spread of the infection, hence respecting the well-being of others, but it might also become an opportunity to develop character that could be valuable for the sake of others. This is not unlike the cloistered monks who do good things for the world by praying, brewing beer and making wines, running farms, and employing people from the surrounding community. St. Benedict contrasts the cenobitic monks such as the Benedictines with a type of monks called the gyrovagues (RB 1.10-12), who move around from one monastery to another, hence lacking an opportunity to live in the same community for a prolonged period. The constant motion of the gyrovagues makes it impossible for them to form meaningful friendships, to contribute to the community in a significant way, and to work through conflicts and personal struggles. The Benedictines, by contrast, take the vow of stability, which gives them the opportunity to strengthen their communities and personal lives. Esther de Waal nicely sums up this idea of the importance of embracing stability in one's cell and community for the cultivation of character: "The stability of place and of relationships are all the means towards the establishment of stability of the heart."²⁷ At this point, one can say that navigating the SHO with Benedictine stability has a long-term benefit, as it is a way to attain stability of character, which is a long-lasting state in a person's life. First, given what we know about the recurrence of pandemics in the history of the world, we can reasonably say that the coronavirus pandemic might not be the last pandemic that will ever happen.²⁸ Other than new pandemics, there might be other occasions in which a person might have to follow a form of the SHO again, such as wars and natural disasters. A person with stability will be more prepared to face these unpredictable challenges. Second, there are people who are or might become home-bound regardless of the pandemic, due to work conditions, health issues, and family situations. The practice of stability and a stable character will help these people flourish during their continued stay at home. Third, the practice of stability in following the SHO would invite people to think about the centrality of home once again. During the coronavirus pandemic, home has become a significant human institution that allows other social institutions such as work, education, and religion to keep functioning. The practice of staying in might help people once again to see home as a place where one can find an opportunity to grow through crucial conversations, individual and family devotions, daily meals, work, and rest.

Challenges

It goes without saying that the challenges to adopting Benedictine stability for people who follow the SHO are colossal. There is already a group of people, called Benedictine oblates, who promise to follow the Rule of St. Benedict as much as possible in their daily lives in the secular world.²⁹ These people are not Benedictine monks but rather people from all walks of life (both lay people and clergy) who are affiliated with a Benedictine monastery. Even during non-pandemic times, Benedictine oblates face difficulties in transferring ritual practices from the monastery into their own contexts.³⁰ Oblates in the world understandably have more difficulties setting aside regular times to pray and work. The following challenges, then, need to be acknowledged in this discourse on Benedictine stability during the coronavirus pandemic.
First, in contrast to a monastery, where there is a controlled schedule, homes during the pandemic can be a chaotic and overwhelming environment. For those who are still single and living alone, their homes can become a makeshift hermitage during the lockdown. People may still question whether Benedictine stability is merely an out-of-reach luxury for some people who sometimes have to juggle childcare, housekeeping, and two part-time jobs.³¹ This is surely a genuine concern, but the original idea of Benedictine stability comes from the Greek ὑπομονή (hupomonē), which means patient endurance. This virtue, which is displayed in martyrdom, anticipates even the most challenging situations, where one's fidelity and commitment are heavily tested.³² Benedictine stability is not a naive idea about the world but rather a conviction that in a fleeting and chaotic world, one can still maintain at least a stable character and a sense of purpose.³³ Second, there is the question of how precisely one can practice Benedictine stability in a very overwhelming context. The most important yet perhaps most difficult thing to do is to resist the temptation to escape or avoid difficulties in the situation one is facing. As Casey says, stability protects the process of purgation when we are experiencing pain in a particular context: "We are tied down under the surgeon's knife."³⁴ In terms of the practical things a person can or should do, one can do as much or as little as one can manage in a particular situation. For some, the SHO actually opens up time to pick up new hobbies, such as stargazing,³⁵ and to set regular times for prayers. For others, life can be very hectic and frustrating, with constant sleep deprivation. Even then, one can still take a few deep breaths and enter into moments of prayer. Third, there is the question as to whether non-religious people would be able to adopt Benedictine stability. It would seem that they might find themselves undergoing a completely novel experience because they do not embrace the entire Christian metaphysical worldview that underlies the Benedictine tradition. To clarify the challenge, let us look at the practice of mindfulness, which has recently been gaining interest from non-religious people. Louchakova-Schwartz has shown that the practice of mindfulness among non-religious people is different from the original Buddhist practice of mindfulness because the former is not concerned with eschatology, enlightenment, or other canonical Buddhist psychology.³⁶ Without these metaphysical frameworks, the non-religious practice of mindfulness seems to be more focused on the structure of consciousness itself, which consists of a deep awareness of self and an intersubjective perception. Similarly, non-religious people, not subscribing to the religious ideation that one's stability is a function of both human work and divine grace, may be unable to adopt Benedictine stability in its entirety, but they might be able to use some of its features. For example, they can focus more on the interplays between change and changelessness, the necessary and the unnecessary, and hope and despair. This experience of non-religious people, which seems to be Benedictine-inspired, is an uncharted territory that is worth exploring. Fourth, there are abusive contexts that can worsen during the pandemic.
Benedictine stability doesn't suggest that one should endure verbal or physical abuse without protesting or seeking help or refuge, just as Benedictine monks are not called to be victimized and abused in a monastery. Benedictine stability is based on the belief that staying put can be a context for progress for those who are able to use the occasion for that purpose. When the context is categorically toxic and unsalvageable, abandoning ship is always available as a last resort. St. Bernard, the founder of the abbey of Clairvaux, thus advises monks to move to a different monastery if their current monastery is too relaxed or corrupt.³⁷

Conclusion

The vow of stability provides a rational way for monks to stay in their physical monastery despite the challenges they face being confined to the same space for a prolonged period of time. First, they move from the world to an enclosed physical space, which is the monastery. Second, they move from simply being in a physical space to being a part of a community or congregation. Lastly, they move from simply being a part of a community to manifesting stability in their community, which involves stability not only in the spatial sense but also in their character. People who follow the SHO can learn from the Benedictine vow of stability. First, they can reason that staying at home may bring some benefits similar to those offered by retreats. Second, they can be motivated to address challenges in the institutions where they belong without resorting to escapism. Third, they can use the stability in their social institutions, such as marriage and work, as a context to better their character. Although religious practices can surely make the pandemic worse, such as what happened in South Korea with the Shincheonji sect,³⁸ some religious values such as Benedictine stability can thus serve as a rational modulation for people to survive and even thrive during the coronavirus pandemic.
6,011.6
2021-01-01T00:00:00.000
[ "Philosophy" ]
Adaptive Image Processing: First Order PDE Constraint Regularizers and a Bilevel Training Scheme

A bilevel training scheme is used to introduce a novel class of regularizers, providing a unified approach to the standard regularizers $TGV^2$ and $NsTGV^2$. Optimal parameters and regularizers are identified, and the existence of a solution for any given set of training imaging data is proved by $\Gamma$-convergence under a conditional uniform bound on the trace constant of the operators and a finite-null-space condition. Some first examples and numerical results are given.

Introduction

Image processing aims at the reconstruction of an original "clean" image starting from a "distorted" one, namely from a datum which has been deteriorated or corrupted by noise effects or damaged digital transmission. The key idea of variational formulations in image processing consists in rephrasing this problem as the minimization of an underlying functional of the form

$$I(u) = \|u - u_\eta\|^2_{L^2(Q)} + R_\alpha(u),$$

where $u_\eta$ is a given corrupted image, $Q := (-1/2, 1/2)^N$ is the $N$-dimensional unit square (in image processing we usually take $N = 2$, i.e., $Q$ represents the domain of a square image), and $R_\alpha$ is a regularizing functional, with $\alpha$ denoting the intensity parameter (which can be a positive scalar or a vector). Minimizing the functional $I$ allows one to reconstruct a "clean" image based on the functional properties of the regularizer $R_\alpha$. Within the context of image denoising, for a fixed regularizer $R_\alpha$ we seek to identify

$$u_{\alpha,R} := \arg\min\left\{\|u - u_\eta\|^2_{L^2(Q)} + R_\alpha(u) : u \in L^2(Q)\right\}.$$

An example is the ROF model (Rudin et al. 1992), in which the regularizer is taken to be $R_\alpha(u) := \alpha\, TV(u)$, where $TV(u)$ is the total variation of $u$ [see, e.g., Ambrosio et al. (2000, Chapter 4)], $\alpha \in \mathbb{R}^+$ is the tuning parameter, and we have

$$u_{\alpha,TV} := \arg\min\left\{\|u - u_\eta\|^2_{L^2(Q)} + \alpha\, TV(u) : u \in L^2(Q)\right\}. \quad (1.1)$$

In view of the coercivity of the minimized functional, the natural class of competitors in (1.1) is $BV(Q)$, the space of real-valued functions of bounded variation in $Q$. The trade-off between the denoising effects of the ROF functional and its feature-preserving capabilities is encoded by the tuning parameter $\alpha \in \mathbb{R}^+$. Indeed, high values of $\alpha$ might lead to a strong penalization of the total variation of $u$, which in turn determines an over-smoothing effect and a resulting loss of information on the internal edges of the reconstructed image, while small values of $\alpha$ cause an unsatisfactory noise removal. In order to determine the optimal $\alpha$, say $\hat{\alpha}$, in De Los Reyes et al. (2016, 2017) the authors proposed a bilevel training scheme, which was originally introduced in machine learning and later adopted by the image processing community (see Chen et al. 2013, 2014; Domke 2012; Tappen et al. 2007).
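For illustration, here is a minimal numerical sketch of the lower-level ROF problem (1.1). It uses gradient descent on a smoothed (epsilon-regularized) total variation, which is only a simple stand-in for the exact solvers usually employed (e.g. primal-dual methods); the step size, smoothing parameter, and boundary handling are assumptions.

```python
# Toy solver for (1.1): minimize ||u - u_eta||^2 + alpha * TV_eps(u).
import numpy as np

def rof_denoise(u_eta, alpha, steps=400, tau=0.05, eps=0.1):
    u = u_eta.astype(float).copy()
    for _ in range(steps):
        # Forward differences (last row/column replicated => zero gradient there).
        gx = np.diff(u, axis=0, append=u[-1:, :])
        gy = np.diff(u, axis=1, append=u[:, -1:])
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)   # smoothed |grad u|
        px, py = gx / norm, gy / norm
        # Divergence via backward differences (np.roll wraps at the boundary;
        # good enough for a sketch).
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        u -= tau * (2.0 * (u - u_eta) - alpha * div)   # gradient step on the energy
    return u
```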
The bilevel training scheme is a semi-supervised training scheme that optimally adapts itself to the given "clean data". To be precise, let $(u_\eta, u_c)$ be a pair of given images, where $u_\eta$ represents the corrupted version and $u_c$ stands for the original version, or the "clean" image. This training scheme searches for the optimal $\alpha$ so that the recovered image $u_{\alpha,TV}$, obtained in (1.1), minimizes the $L^2$-distance from the clean image $u_c$. An implementation of such a training scheme, denoted by (T) and equipped with the total variation $TV$, is

Level 1. $\bar\alpha := \arg\min\left\{ \|u_{\alpha,TV} - u_c\|^2_{L^2(Q)} : \alpha \in \mathbb{R}^+ \right\}$, (T-L1)
Level 2. $u_{\alpha,TV} := \arg\min\left\{ \|u - u_\eta\|^2_{L^2(Q)} + \alpha TV(u) : u \in L^2(Q) \right\}$. (T-L2)

An important observation is that the geometric properties of the regularizer $TV$ play an essential role in the identification of the reconstructed image $u_{\alpha,TV}$ and may lead to a loss of some fine texture in the image. The choice of a given regularizer $R_\alpha$ is indeed a crucial step in the formulation of the denoising problem: on the one hand, the structure of the regularizer must be such that the removal of undesired noise effects is guaranteed, and on the other hand the disruption of essential details of the image must be prevented. For these reasons, various choices of regularizers have been proposed in the literature. For example, the second order total generalized variation $TGV^2_\alpha$, characterized in Bredies et al. (2010), is defined as
$$TGV^2_\alpha(u) := \min\left\{ \alpha_1 |Du - v|_{\mathcal{M}_b(Q;\mathbb{R}^N)} + \alpha_0 |(\mathrm{sym}\nabla)v|_{\mathcal{M}_b(Q;\mathbb{R}^{N\times N})} : v \in BD(Q) \right\}, \tag{1.2}$$
where $Du$ denotes the distributional gradient of $u$, $(\mathrm{sym}\nabla)v := (\nabla v + \nabla^T v)/2$, $\mathcal{M}_b(Q;\mathbb{R}^{N\times N})$ is the space of bounded Radon measures in $Q$ with values in $\mathbb{R}^{N\times N}$, $\alpha_0$ and $\alpha_1$ are positive tuning parameters, and $\alpha := (\alpha_0, \alpha_1)$. A further possible choice for the regularizer is the non-symmetric counterpart of the $TGV^2_\alpha$-seminorm defined above, namely the $NsTGV^2_\alpha$ functional, in which the symmetrized gradient is replaced by the full gradient of $v$ (see, e.g., Valkonen et al. 2013; Valkonen 2017). The different regularizers have been shown to have several perks and drawbacks for image reconstruction. An important question is thus how to identify the regularizer that might provide the best possible image denoising for a given class of corrupted images. To address this problem, it is natural to use a straightforward modification of scheme (T) by allowing different regularizers inside Level 2. For example, one could set

Level 1. $(\bar\alpha, \bar R) := \arg\min\left\{ \|u_{\alpha,R} - u_c\|^2_{L^2(Q)} : R_\alpha \in \{\alpha TV,\ TGV^2_\alpha,\ NsTGV^2_\alpha\} \right\}$,
Level 2. $u_{\alpha,R} := \arg\min\left\{ \|u - u_\eta\|^2_{L^2(Q)} + R_\alpha(u) : u \in L^1(Q) \right\}$. (1.3)

However, the finite number of possible choices for the regularizer within this training scheme would imply that the optimal regularizer $\bar R_\alpha$ would simply be determined by performing scheme (T) finitely many times, each time with a different regularizer $R_\alpha$. In turn, some possible texture effects for which an "intermediate" (or interpolated) reconstruction between the one provided by, say, $TGV^2_\alpha$ and $NsTGV^2_\alpha$ might be more accurate would then be neglected in the optimization procedure. Therefore, one main challenge in the setup of such a training scheme is to give a meaningful interpolation between the regularizers used in (1.3), and also to guarantee that the collection of the corresponding functional spaces exhibits compactness and lower semicontinuity properties. The aim of this paper is threefold. First, we propose a novel class of image-processing operators, the PDE-constrained total generalized variation operators $PGV^2_{\alpha,B}$, defined as
$$PGV^2_{\alpha,B}(u) := \inf\left\{ \alpha_1 |Du - v|_{\mathcal{M}_b(Q;\mathbb{R}^N)} + \alpha_0 |Bv|_{\mathcal{M}_b(Q;\mathbb{R}^{N\times N})} : v \in BV_B(Q;\mathbb{R}^N) \right\}, \tag{1.4}$$
where $B$ is a first order linear differential operator (see Sect. 2 and Definition 3.5) and $\alpha := (\alpha_0, \alpha_1)$, with $\alpha_0, \alpha_1 \in (0, +\infty)$.
We also define the space of functions with bounded second order $PGV^2_{\alpha,B}$-seminorms (see Definition 3.6). Note that if $B := \mathrm{sym}\nabla$, then the operator $PGV^2_{\alpha,B}$ defined in (1.4) coincides with the operator $TGV^2_\alpha$ mentioned in (1.2). In fact, we will show that, under appropriate assumptions (see Definition 6.1), the class described in (1.4) provides a unified approach to some of the standard regularizers mentioned in (1.3), generalizing the results in Brinkmann et al. (2019) (see Sect. 7.2). Moreover, the collection of functionals described in (1.4) naturally incorporates the recent PDE-based approach to image denoising formulated in Barbu and Marinoschi (2017) via a nonconvex optimal control problem, thus offering a very general and abstract framework to simultaneously describe a variety of different image-processing techniques. Adding to the model higher-order regularizations which can differ from the symmetric gradient additionally allows one to enhance image reconstruction in one direction more than in the others, thus paving the way for a further study of anisotropic noise reduction.

The second main goal of this article is the study of a training scheme optimizing the trade-off between effective reconstruction and fine image-detail preservation. That is, we propose a new bilevel training scheme that simultaneously yields the optimal regularizer $PGV^2_{\alpha,B}(u)$ in the class described in (1.4) and an optimal tuning parameter $\alpha$, so that the corresponding reconstructed image $u_{\alpha,B}$, obtained in Level 2 of the $(T^2_\theta)$-scheme (see $(T^2_\theta\text{-L2})$ below), minimizes the $L^2$-distance from the original clean image $u_c$. To be precise, in Sects. 3, 4, and 5 we study the improved training scheme $(T^2_\theta)$ for $\theta \in (0,1)$, defined as follows:

Level 1. $(\bar\alpha, \bar B) := \arg\min\left\{ \|u_{\alpha,B} - u_c\|^2_{L^2(Q)} : \alpha \in [\theta, 1/\theta]^2,\ B \in \Sigma' \right\}$, $(T^2_\theta\text{-L1})$
Level 2. $u_{\alpha,B} := \arg\min\left\{ \|u - u_\eta\|^2_{L^2(Q)} + PGV^2_{\alpha,B}(u) : u \in BV(Q) \right\}$, $(T^2_\theta\text{-L2})$

where $\Sigma'$ is an infinite collection of first order linear differential operators $B$ (see Definitions 3.4 and 5.1). We prove the existence of optimal solutions to $(T^2_\theta\text{-L1})$ by showing that the functional
$$I_{\alpha,B}(u) := \|u - u_\eta\|^2_{L^2(Q)} + PGV^2_{\alpha,B}(u) \tag{1.5}$$
is continuous, in the sense of $\Gamma$-convergence with respect to the $L^1$-topology, with respect to the parameters $\alpha$ and the operators $B$ (see Theorem 4.2). A simplified statement of our main result (see Theorem 5.4) is the following.

Theorem 1.1 The training scheme $(T^2_\theta)$ admits at least one solution $(\bar\alpha, \bar B) \in [\theta, 1/\theta]^2 \times \Sigma'$ and provides an associated optimally reconstructed image $u_{\bar\alpha,\bar B} \in BV(Q)$.

The collection of operators $B$ used in $(T^2_\theta\text{-L1})$ has to satisfy several natural regularity and ellipticity assumptions, which are fulfilled by $B := \nabla$ and $B := \mathrm{sym}\nabla$ (see Sect. 7.2.1). The general requirements on $B$ that allow scheme $(T^2_\theta)$ to have a solution are listed in Assumptions 3.2 and 3.3. Later, in Sect. 6, as the third main contribution of this article, we provide in Definition 6.1 a collection of operators $B$ satisfying Assumptions 3.2 and 3.3 under some uniform bounds on the behavior of their traces and under finiteness of their null spaces. A simplified statement of our result is the following (see Theorem 6.5 for the detailed formulation).

Theorem 1.2 Let $B$ be a first order differential operator such that there exists a differential operator $\mathcal{A}$ for which $(\mathcal{A}, B)$ is a training operator pair, namely $\mathcal{A}$ admits a fundamental solution satisfying suitable regularity assumptions, and the pair $(\mathcal{A}, B)$ fulfills a suitable integration-by-parts formula (see Definition 6.1 for the precise conditions). Then $B$ is such that the training scheme $(T^2_\theta)$ admits a solution.
The requirements collected in Definition 6.1 and the analysis in Sect. 6 stem from the observation that a fundamental property the admissible operators $B$ must satisfy is that the set of maps $v \in L^1(Q;\mathbb{R}^N)$ such that $Bv$ is a bounded Radon measure (henceforth denoted by $BV_B(Q;\mathbb{R}^N)$) must embed compactly in $L^1(Q;\mathbb{R}^N)$. In the case in which $B$ coincides with $\nabla$ or $\mathrm{sym}\nabla$, a crucial ingredient is the Kolmogorov–Riesz compactness theorem (see Brezis 2011, Theorem 4.26, and Proposition 6.6). In particular, for $B = \mathrm{sym}\nabla$ the key point of the proof is to guarantee a uniform control of translates for bounded sets $F \subset BD(Q)$. This in turn relies on a formal computation involving a fundamental solution $\varphi$ for $\mathrm{curl\,curl}$ and the Dirac deltas $\delta$ and $\delta_h$ centered at the origin and at $h$, respectively. In the case in which $B = \mathrm{sym}\nabla$ the conclusion then follows from the fact that one can perform an "integration by parts" on the right-hand side of this formula and estimate the quantity $\mathrm{curl\,curl}((\delta_h * \varphi - \varphi) * f)$ by means of the total variation of $(\mathrm{sym}\nabla) f$, owing to the regularity of the fundamental solution of $\mathrm{curl\,curl}$. The operator $\mathcal{A}$ in Theorem 1.2 plays the role of $\mathrm{curl\,curl}$ when $\mathrm{sym}\nabla$ is replaced by a generic operator $B$. Definition 6.1 is given in such a way as to guarantee that the above formal argument is rigorously justified for a pair of operators $(\mathcal{A}, B)$.

Finally, in Sect. 7.2 we give some explicit examples to show that our class of regularizers $PGV^2_{\alpha,B}$ includes the seminorms $TGV^2_\alpha$ and $NsTGV^2_\alpha$, as well as smooth interpolations between them. We remark that the task of determining not only the optimal tuning parameter but also the optimal regularizer for given training image data $(u_\eta, u_c)$ has been undertaken in Davoli and Liu (2018), where we introduced one-dimensional real order $TGV^r$ regularizers, $r \in [1, +\infty)$, as well as a bilevel training scheme that simultaneously provides the optimal intensity parameters and order of derivation for one-dimensional signals. Our analysis is complemented by the very first numerical simulations of the proposed bilevel training scheme. Although this work focuses mainly on the theoretical analysis of the operators $PGV^2_{\alpha,B}$ and on showing the existence of optimal results for the training scheme $(T^2_\theta)$, in Sect. 7.3 a primal-dual algorithm for solving $(T^2_\theta\text{-L2})$ is discussed, and some preliminary numerical examples, such as image denoising, are provided.

With this article we initiate our study of the combination of PDE-constraints and bilevel training schemes in image processing. Future goals will be:
• the construction of a finite grid approximation in which the optimal result $(\bar\alpha, \bar B)$ for the training scheme $(T^2_\theta)$ can be efficiently determined, with an estimation of the approximation accuracy;
• spatially dependent differential operators and multi-layer training schemes. This will allow us to specialize the regularization according to the position in the image, providing a more accurate analysis of complex textures and of images alternating areas with finer details and parts with sharper contours (see also Fonseca and Liu 2017).

This paper is organized as follows: in Sect. 2 we collect some notation and preliminary results. In Sect. 3 we analyze the main properties of the $PGV^2_{\alpha,B}$-seminorms. The $\Gamma$-convergence result and the bilevel training scheme are the subjects of Sects. 4 and 5, respectively. We point out that the results in Sects. 3 and 4 are direct generalizations of the works in Bredies and Valkonen (2011), Bredies and Holler (1993).
The novelty of our approach consists in providing a slightly stronger analysis of the behavior of the functionals in (1.5), by showing not only convergence of minimizers under convergence of parameters and regularizers, but also a complete $\Gamma$-convergence result. The expert reader may skip Sects. 3–5 and proceed directly to the content of Sect. 6. Section 6 is devoted to the analysis of the space $BV_B$ for suitable differential operators $B$. The numerical implementation of some explicit examples is presented in Sect. 7.3.

Notations and Preliminary Results

We collect below some notation that will be adopted in connection with differential operators. Let $N \in \mathbb{N}$ be given, and let $Q := (-1/2, 1/2)^N$ be the unit open cube in $\mathbb{R}^N$ centered at the origin and with sides parallel to the coordinate axes. $M^3_N$ is the space of real tensors of order $N \times N \times N$. Also, $\mathcal{D}'(Q;\mathbb{R}^N)$ and $\mathcal{D}'(Q;\mathbb{R}^{N\times N})$ stand for the spaces of distributions with values in $\mathbb{R}^N$ and $\mathbb{R}^{N\times N}$, respectively, and $\mathbb{R}^N_+$ denotes the set of vectors in $\mathbb{R}^N$ having positive entries. For every open set $U \subset \mathbb{R}^N$, the notation $B$ will be used for first order differential operators of the form
$$Bv := \sum_{i=1}^N B^i \frac{\partial v}{\partial x_i}, \tag{2.1}$$
where $\partial/\partial x_i$ denotes the distributional derivative with respect to the $i$-th variable, and where $B^i \in M^3_N$ for each $i = 1, \ldots, N$. Given a sequence $\{B_n\}_{n=1}^\infty$ of first order differential operators and a first order differential operator $B$, with coefficients $\{B^i_n\}_{n=1}^\infty$ and $B^i$, $i = 1, \ldots, N$, respectively, we say that $B_n \to B$ in $\|\cdot\|_\infty$ if
$$\|B_n - B\|_\infty := \max_{i=1,\ldots,N} \|B^i_n - B^i\| \to 0, \tag{2.2}$$
where, for $B \in M^3_N$, $\|B\|$ stands for its Euclidean norm.

The Space $BV_B$ and the Class of Admissible Operators

We generalize the standard total variation seminorm by using first order differential operators $B : \mathcal{D}'(Q;\mathbb{R}^N) \to \mathcal{D}'(Q;\mathbb{R}^{N\times N})$ in the form (2.1).

Definition 3.1 We define the space of tensor-valued functions $BV_B(Q;\mathbb{R}^N)$ as
$$BV_B(Q;\mathbb{R}^N) := \left\{ v \in L^1(Q;\mathbb{R}^N) : Bv \in \mathcal{M}_b(Q;\mathbb{R}^{N\times N}) \right\},$$
and we equip it with the norm $\|v\|_{BV_B(Q;\mathbb{R}^N)} := \|v\|_{L^1(Q;\mathbb{R}^N)} + |Bv|_{\mathcal{M}_b(Q;\mathbb{R}^{N\times N})}$.

We refer to Raiţă and Skorobogatova (2020) for some recent results on $BV_B$-spaces for elliptic and cancelling operators, as well as to Kristensen and Raiţă (2022) for a study of associated Young measures. We point out that, in the same way in which $BV$ spaces relate to $W^{1,p}$-spaces, the spaces $BV_B$ are connected to the theory of $W^{1,p}_B$-spaces, cf. Gmeineder and Raiţă (2019), Gmeineder et al. (2019), Raiţă (2018). See also Guerra and Raiţă (2020) for a related compensated-compactness study. In order to introduce the class of admissible operators, we first list some assumptions on the operator $B$.

Assumption 3.2 The following hold:
1. (Compactness) The injection of $BV_B(Q;\mathbb{R}^N)$ into $L^1(Q;\mathbb{R}^N)$ is compact.
2. (Density) Smooth maps are dense in $BV_B(Q;\mathbb{R}^N)$ with respect to a suitable (strict-type) convergence.
We point out that all the requirements above are satisfied for $B := \nabla$.

Assumption 3.3 The following compactness and closure property applies to a collection of operators: whenever $B_n \to B$ in $\|\cdot\|_\infty$ within the collection and $\{v_n\}$ has uniformly bounded $BV_{B_n}$-norms, then, up to a subsequence, $v_n \to v$ strongly in $L^1(Q;\mathbb{R}^N)$ for some $v \in BV_B(Q;\mathbb{R}^N)$, with lower semicontinuity of the total variations $|B_n v_n|_{\mathcal{M}_b}$.

Definition 3.4 We denote by $\Sigma$ the collection of operators $B$ defined in (2.1), with finite dimensional null-space $N(B)$, and satisfying Assumption 3.2. In Sect. 6 we will exhibit a subclass of operators $B \in \Sigma$ additionally fulfilling the compactness and closure Assumption 3.3.

The PGV-Total Generalized Variation

We introduce below the definition of the PDE-constrained total generalized variation seminorms.

Definition 3.5 For $\alpha \in \mathbb{R}^2_+$ and $B \in \Sigma$, the seminorm $PGV^2_{\alpha,B}$ is defined as in (1.4). We note that for all $\alpha \in \mathbb{R}^2_+$ the seminorms $PGV^2_{\alpha,B}$ are topologically equivalent. With a slight abuse of notation, in what follows we will write $PGV^2_B$ instead of $PGV^2_{\alpha,B}$ whenever the dependence of the seminorm on a specific multi-index $\alpha \in \mathbb{R}^2_+$ is not relevant for the presentation of the results. We introduce below the set of functions with bounded PDE-generalized variation seminorms.
Definition 3.6 We define the set of functions with bounded $PGV^2_B$-seminorm as $BV_{PGV^2_B}(Q) := \{u \in L^1(Q) : PGV^2_B(u) < +\infty\}$.

We next show that the $PGV^2_B$-seminorm is finite if and only if the $TV$-seminorm is. The next three propositions establish some basic properties of the $PGV^2$ regularizers. The expert reader may skip their proofs and proceed directly to Sect. 4.

Proposition 3.7 Let $u \in L^1(Q)$ and recall $PGV^2_B(u)$ from Definition 3.5. Then $PGV^2_B(u) < +\infty$ if and only if $TV(u) < +\infty$.

Proof It suffices to observe that choosing $v = 0$ as a competitor in (1.4) yields $PGV^2_B(u) \le \alpha_1 |Du|_{\mathcal{M}_b(Q;\mathbb{R}^N)}$; the converse estimate is contained in (3.3).

We next prove that the infimum problem on the right-hand side of (3.3) has a solution.

Proof Let $u \in BV(Q)$ and, without loss of generality, assume that $\alpha = (1,1)$. In view of Proposition 3.7 we have $PGV^2_B(u) < +\infty$, so we may fix an infimizing sequence $\{v_n\}$ with uniformly bounded energies for every $n \in \mathbb{N}$. In view of Assumption 3.2, and together with (3.5) and (3.6), we extract a limit map $v \in BV_B(Q;\mathbb{R}^N)$. The minimality of $v$ follows by lower semicontinuity.

We close this section by studying (Proposition 3.9) the asymptotic behavior of the $PGV^2_B$-seminorms in terms of the operator $B$ for subclasses of $\Sigma$ satisfying Assumption 3.3. We claim that the convergence (3.10) holds. In view of the density result in Assumption 3.2, Statement 2, we may assume that $v \in C^\infty(Q;\mathbb{R}^N)$ and, for $\varepsilon > 0$ small, conclude by means of (3.11). Claim (3.10) then follows from the arbitrariness of $\varepsilon > 0$.

$\Gamma$-Convergence of Functionals Defined by PGV-Total Generalized Variation Seminorms

In this section we prove a $\Gamma$-convergence result with respect to the operator $B$. For $r > 0$ and $B \in \Sigma$ we denote by $(B)_r$ the ball of radius $r$ centered at $B$ with respect to $\|\cdot\|_\infty$ (see (4.1)). Throughout this section, let $u_\eta \in L^2(Q)$ be a given datum representing a corrupted image. The following theorem is the main result of this section.

Theorem 4.2 Let $\{B_n\}_{n=1}^\infty \subset \Sigma$ satisfy Assumption 3.3, and let $\{\alpha_n\}_{n=1}^\infty \subset \mathbb{R}^2_+$ be such that $B_n \to B$ in $\|\cdot\|_\infty$ and $\alpha_n \to \alpha \in \mathbb{R}^2_+$. Then the functionals $I_{\alpha_n,B_n}$ satisfy the following compactness property: if $\{u_n\}$ satisfies $\sup_n I_{\alpha_n,B_n}(u_n) < +\infty$, then there exists $u \in BV(Q)$ such that, up to the extraction of a subsequence (not relabeled), $u_n \rightharpoonup^* u$ weakly* in $BV(Q)$. Additionally, $I_{\alpha_n,B_n}$ $\Gamma$-converges to $I_{\alpha,B}$ in the $L^1$-topology. To be precise, for every $u \in BV(Q)$ the following two conditions hold:

(Liminf inequality) for every $\{u_n\} \subset L^1(Q)$ with $u_n \to u$ in $L^1(Q)$,
$$I_{\alpha,B}(u) \le \liminf_{n\to\infty} I_{\alpha_n,B_n}(u_n);$$
(Recovery sequence) there exists $\{u_n\} \subset L^1(Q)$ with $u_n \to u$ in $L^1(Q)$ such that
$$\limsup_{n\to\infty} I_{\alpha_n,B_n}(u_n) \le I_{\alpha,B}(u).$$

We subdivide the proof of Theorem 4.2 into two propositions. For $B \in \Sigma$, we consider the projection operator $P_B$ onto the null space $N(B)$. Note that this projection operator is well defined owing to the assumption that $N(B)$ is finite dimensional [see Brezis (2011, p. 38, Definition and Example 2) and Breit et al. (2017, Subsection 3.1)]. Next we have an enhanced version of Korn's inequality.

Proof (sketch) Arguing by contradiction, suppose the inequality fails with constant $n$ for every $n \in \mathbb{N}$. Up to a normalization, we can assume that the maps $\tilde v_n$ have unit $L^1$-norm; thus, by (4.3), we have $|B_n \tilde v_n|_{\mathcal{M}_b} \to 0$. In view of Assumption 3.3, up to a further subsequence (not relabeled), there exists $\tilde v \in BV_B(Q;\mathbb{R}^N)$ such that $\tilde v_n \to \tilde v$ strongly in $L^1(Q)$ and $|B\tilde v|_{\mathcal{M}_b(Q;\mathbb{R}^{N\times N})} = 0$. Moreover, in view of (4.5), we also have $\|\tilde v\|_{L^1(Q;\mathbb{R}^N)} = 1$. By the joint continuity of the projection operator and by (4.4) we have $\tilde v = P_B(\tilde v)$, and hence we must have $\tilde v = 0$, contradicting the fact that $\|\tilde v\|_{L^1(Q;\mathbb{R}^N)} = 1$.

The following proposition is instrumental for establishing the liminf inequality.

Proposition 4.4 Let $\{B_n\}$ and $\{\alpha_n\}$ be as in Theorem 4.2, and let $\{u_n\} \subset BV(Q)$ satisfy the energy bound (4.6). Then there exists $u \in BV(Q)$ such that, up to the extraction of a subsequence (not relabeled), $u_n \rightharpoonup^* u$ weakly* in $BV(Q)$, and the liminf inequality holds along $\{u_n\}$.

Proof Fix $r > 0$ and recall the definition of $(B)_r$ from (4.1). We claim that if $r$ is small enough then there exists $C_r > 0$ such that the two-sided comparison (4.8) holds for all $u \in BV(Q)$ and $B' \in (B)_r$. Indeed, by Definitions 3.5 and 3.6, the first inequality in (4.8) always holds for all $B' \in \Sigma$ and $u \in BV(Q)$. The crucial step is to prove that the second inequality in (4.8) holds. Let $N_r(B)$ denote the set built from the null spaces of the operators in $(B)_r$. We claim that there exists $C > 0$, depending on $r$, such that for each $u \in BV(Q)$ and $\omega \in N_r(B)$ the estimate (4.9) holds. Suppose that (4.9) fails. Then we find sequences violating it with constant $n$, for every $n \in \mathbb{N}$.
Thus, up to a normalization, we can assume that (4.10) and (4.11) hold, which implies that $u_n \to 0$ strongly in $L^1(Q)$. By (4.10) and (4.11), it follows that $|\omega_n|_{\mathcal{M}_b(Q;\mathbb{R}^N)}$ is uniformly bounded, and hence, up to a subsequence (not relabeled), there exists a limit $\omega_0$. Then $B_n \omega_n = 0$ for all $n \in \mathbb{N}$. Since $\|B_n - B\|_\infty < r$, in particular the sequence fulfills Assumption 3.3, and hence, upon extracting a further subsequence (not relabeled), there holds $\omega_n \to \omega_0$ strongly in $L^1(Q;\mathbb{R}^N)$. Additionally, since $u_n \to 0$ strongly in $L^1(Q)$, we infer that $Du_n \to 0$ in the sense of distributions. Therefore, by (4.12) we deduce that $\omega_0 = 0$. Using again (4.11), we reach a contradiction with (4.10). This completes the proof of (4.9). We are now ready to prove the second inequality in (4.8), i.e., (4.13), for some constant $C_r > 0$ and for all $B' \in (B)_r$. Fix $B' \in (B)_r$ and, by Proposition 3.8, let $v_{B'}$ satisfy (4.14). A chain of estimates follows, where in the first inequality we use (4.9), the third inequality follows by (4.2), and in the last equality we invoke (4.14). Defining $C_r := C + C' + 1$, we obtain (4.13).

Now we prove the compactness property. Fix $\varepsilon > 0$. We first observe that, since $\alpha_n \to \alpha \in \mathbb{R}^2_+$, writing $\alpha_n = (\alpha^0_n, \alpha^1_n)$, for $n$ large enough there holds (4.15). In particular, in view of (4.6) we have (4.16). Since $B_n \to B$ in $\|\cdot\|_\infty$, choosing $r > 0$ small enough there exists $\bar N > 0$ such that $B_n \in (B)_r$ for all $n \ge \bar N$. Thus, by (4.8) and (4.16), we infer that
$$\sup\left\{\|u_n\|_{BV(Q)} : n \in \mathbb{N}\right\} \le C_1 \sup\left\{\|u_n\|_{BV_{PGV^2_{B_n}}(Q)} : n \in \mathbb{N}\right\} < +\infty,$$
and thus we may find $u \in BV(Q)$ such that, up to a subsequence (not relabeled), $u_n \rightharpoonup^* u$ in $BV(Q)$. Additionally, again by Proposition 3.8, for every $n \in \mathbb{N}$ there exists $v_n \in BV_{B_n}(Q;\mathbb{R}^N)$ attaining the infimum in the definition of $PGV^2_{\alpha_n,B_n}(u_n)$. By (4.6) and (4.7), and in view of Assumption 3.3, we find $v \in BV_B(Q;\mathbb{R}^N)$ such that, up to a subsequence (not relabeled), $v_n \to v$ strongly in $L^1$. The liminf inequality then follows, where in the second-to-last inequality we use Assumption 3.3 and (4.15). The arbitrariness of $\varepsilon$ concludes the proof of the proposition.

Proposition 4.5 (Recovery sequence) For every $u \in BV(Q)$, the constant sequence $u_n := u$ satisfies the recovery-sequence condition.

Proof This is a direct consequence of Proposition 3.9 by choosing $u_n := u$.

We close Sect. 4 by proving Theorem 4.2.

Proof of Theorem 4.2 Properties (Compactness) and (Liminf inequality) hold in view of Proposition 4.4, and Property (Recovery sequence) follows from Proposition 4.5.

The Bilevel Training Scheme with PGV-Regularizers

In this section we introduce a bilevel training scheme associated to our class of regularizers and show its well-posedness. Let $u_\eta \in L^2(Q)$ and $u_c \in BV(Q)$ be the corrupted and clean images, respectively. In what follows we will refer to pairs $(u_c, u_\eta)$ as training pairs. We recall that $\Sigma$ was introduced in Definition 3.4.

Definition 5.1 We say that $\Sigma' \subset \Sigma$ is a training set if the operators in $\Sigma'$ satisfy Assumption 3.3, and if $\Sigma'$ is closed and bounded in $\|\cdot\|_\infty$. Examples of training sets are provided in Sect. 7.

We introduce the following bilevel training scheme.

Definition 5.2 Let $\theta \in (0,1)$ and let $\Sigma'$ be a training set. The two levels of the scheme $(T^2_\theta)$ are $(T^2_\theta\text{-L1})$ and $(T^2_\theta\text{-L2})$ as displayed in the Introduction, with $\alpha$ ranging in $[\theta, 1/\theta]^2$ and $B$ in $\Sigma'$.

We first show that the Level 2 problem in $(T^2_\theta\text{-L2})$ admits a solution for every given $u_\eta \in L^2(Q)$ and every $\alpha \in \mathbb{R}^2_+$.

Proof Let $\{u_n\}$ be a minimizing sequence satisfying (5.1) for every $n \in \mathbb{N}$, and let $\{v_n\} \subset BV_B(Q)$ be the associated sequence of maps provided by Proposition 3.8. In view of (5.1), there exists a constant $C$ such that (5.2) holds for every $n \in \mathbb{N}$. We claim that
$$\sup\left\{\|v_n\|_{L^1(Q;\mathbb{R}^N)} : n \in \mathbb{N}\right\} < +\infty. \tag{5.3}$$
Arguing by contradiction and dividing both sides of (5.2) by $\|v_n\|_{L^1(Q)}$, we deduce (5.9). Since by (5.8) $D\tilde u_n \to 0$ in the sense of distributions, we deduce from (5.9) that $v = 0$.
This contradicts (5.6), and implies claim (5.3). By combining (5.2) and (5.3), we obtain a uniform bound on $\|v_n\|_{BV_B(Q;\mathbb{R}^N)}$ for every $n \in \mathbb{N}$ and some $C > 0$. Thus, by (5.2) and Assumption 3.2, there exist $u_B \in BV(Q)$ and $v \in BV_B(Q)$ such that, up to the extraction of a subsequence (not relabeled), $u_n \rightharpoonup^* u_B$ in $BV(Q)$ and $v_n \to v$ in $L^1(Q;\mathbb{R}^N)$. In view of (5.1), and by lower semicontinuity, we obtain the desired minimality of $u_B$.

Theorem 5.4 The training scheme $(T^2_\theta)$ admits at least one solution $(\bar\alpha, \bar B) \in [\theta, 1/\theta]^2 \times \Sigma'$, and provides an associated optimally reconstructed image $u_{\bar\alpha,\bar B} \in BV(Q)$.

Proof By the boundedness and closedness of $\Sigma'$ in $\|\cdot\|_\infty$, up to a subsequence (not relabeled), there exists $(\bar\alpha, \bar B) \in [\theta, 1/\theta]^2 \times \Sigma'$ such that $\alpha_n \to \bar\alpha$ in $\mathbb{R}^2$ and $B_n \to \bar B$ in $\|\cdot\|_\infty$. Therefore, in view of Theorem 4.2 and the Fundamental Theorem of $\Gamma$-convergence (see, e.g., Dal Maso 1993), we have
$$u_{\alpha_n,B_n} \rightharpoonup^* u_{\bar\alpha,\bar B} \ \text{weakly* in } BV(Q) \text{ and strongly in } L^1(Q), \tag{5.10}$$
where $u_{\alpha_n,B_n}$ and $u_{\bar\alpha,\bar B}$ are defined in $(T^2_\theta\text{-L2})$. By (5.10), the Level 1 cost converges along the sequence, which completes the proof.

The Training Set $\Sigma[\mathcal{A}]$ Based on $(\mathcal{A}, B)$ Training Operator Pairs

This section is devoted to providing a class of operators $B$ belonging to $\Sigma$ (see Definition 3.4), satisfying Assumption 3.3, and being closed with respect to the convergence in (2.2). Recall that $Q = (-\frac{1}{2}, \frac{1}{2})^N$.

A Subcollection of $\Sigma$ Characterized by $(\mathcal{A}, B)$ Training Operator Pairs

Let $U$ be an open set in $\mathbb{R}^N$, and let $\mathcal{A} : \mathcal{D}'(U;\mathbb{R}^N) \to \mathcal{D}'(U;\mathbb{R}^N)$ be a $d$-th order differential operator, defined as
$$\mathcal{A}u := \sum_{|a| \le d} A_a \frac{\partial^a u}{\partial x^a},$$
where, for every multi-index $a = (a_1, a_2, \ldots, a_N) \in \mathbb{N}^N$, $\frac{\partial^a}{\partial x^a}$ is meant in the sense of distributional derivatives, and $A_a$ is a linear operator mapping $\mathbb{R}^N$ to $\mathbb{R}^N$. Let $B$ be a first order differential operator, $B : \mathcal{D}'(U;\mathbb{R}^N) \to \mathcal{D}'(U;\mathbb{R}^{N\times N})$, given by
$$Bv := \sum_{i=1}^N B^i \frac{\partial v}{\partial x_i},$$
where $B^i \in M^3_N$ for each $i = 1, \ldots, N$, and where $\partial/\partial x_i$ denotes the distributional derivative with respect to the $i$-th variable. We will restrict our analysis to elliptic pairs $(\mathcal{A}, B)$ satisfying the ellipticity assumptions below.

Definition 6.1 We say that $(\mathcal{A}, B)$ is a training operator pair if $B$ has finite dimensional null-space $N(B)$, and $(\mathcal{A}, B)$ satisfies the following assumptions:
1. For every $\lambda \in \{-1,1\}^N$, the operator $\mathcal{A}$ has a fundamental solution $P_\lambda \in L^1(\mathbb{R}^N;\mathbb{R}^N)$ such that:
a. $\mathcal{A}P_\lambda = \lambda\delta$, where $\delta$ denotes the Dirac measure centered at the origin;
b. $P_\lambda \in C^\infty(\mathbb{R}^N\setminus\{0\};\mathbb{R}^N)$ and $\frac{\partial^a}{\partial x^a}P_\lambda \in L^1_{loc}(\mathbb{R}^N;\mathbb{R}^N)$ for every multi-index $a \in \mathbb{N}^N$ with $|a| \le d-1$ (where $d$ is the order of the operator $\mathcal{A}$).
2. For every open set $U \subset \mathbb{R}^N$ such that $Q \subset U$, and for every $u \in W^{d-1,1}(U;\mathbb{R}^N)$ and $v \in C^\infty_c(U;\mathbb{R}^N)$, the integration-by-parts inequality (6.1) holds with a constant $C_\mathcal{A}$.

Note that, in view of 1b, one directly obtains the following property: for every $a \in \mathbb{N}^N$ with $|a| \le d-1$, and for every open set $U \subset \mathbb{R}^N$ such that $Q \subset U$, the translation estimate (6.2), which defines the modulus $M_\mathcal{A}(h)$, holds. Explicit examples of operators $\mathcal{A}$ and $B$ satisfying Definition 6.1 are provided in Sect. 7. Condition 2 in Definition 6.1 can be interpreted as an "integration-by-parts requirement", as highlighted by the example below. Let $N = 2$, $d = 2$, $B = \nabla$, and let $U \subset \mathbb{R}^2$ be an open set such that $Q \subset U$. Consider, for instance, the second order differential operator $\mathcal{A} := \Delta$, acting componentwise. Then, for every $u \in W^{2,1}(U;\mathbb{R}^2)$ and $v \in C^\infty_c(U;\mathbb{R}^2)$ there holds
$$\int_U u_i\,\Delta v_i\,dx = -\int_U \nabla u_i \cdot \nabla v_i\,dx \qquad \text{for every } i = 1,2.$$
In other words, the pair $(\mathcal{A}, B)$ satisfies (6.1) with $C_\mathcal{A} = 1$.

Definition 6.2 For every $\mathcal{A}$ as in Definition 6.1 we denote by $\Sigma_\mathcal{A}$ the following collection of first order differential operators $B$:
$$\Sigma_\mathcal{A} := \{B : (\mathcal{A}, B) \text{ is a training operator pair}\}.$$

The following extension result in $BV_B$ is a corollary of the properties of the trace operator defined in Breit et al. (2017, Section 4).
Lemma 6.3 By Breit et al. (2017, (4.9) and Theorem 1.1) there exists a continuous trace operator $\mathrm{tr} : BV_B(Q;\mathbb{R}^N) \to L^1(\partial Q;\mathbb{R}^N)$. By the classical results of E. Gagliardo (see Gagliardo 1957) there exists a linear and continuous extension operator $E : L^1(\partial Q;\mathbb{R}^N) \to W^{1,1}(\mathbb{R}^N\setminus\overline Q;\mathbb{R}^N)$. The statement follows by setting the extension of $u$ equal to $\chi_Q u + \chi_{\mathbb{R}^N\setminus Q} E(\mathrm{tr}\,u)$, where $\chi_Q$ and $\chi_{\mathbb{R}^N\setminus Q}$ denote the characteristic functions of the sets $Q$ and $\mathbb{R}^N\setminus Q$, respectively, and by Breit et al. (2017, Corollary 4.21).

Remark 6.4 We point out that, as a direct consequence of Lemma 6.3, we obtain the norm bound (6.4) for the extension. In particular, from (6.4) and Breit et al. (2017, Corollary 4.21), the constant $C_B$ in the inequality above is controlled in terms of $C_G$, the constant associated with the classical Gagliardo extension in $W^{1,1}$ (see Gagliardo 1957), which is thus independent of $B$, and of $C^T_B$, the constant associated with the trace operator in $BV_B$ (6.5). The main result of this section is the following.

Theorem 6.5 Let $\mathcal{A}$ be as in Definition 6.1. Let $\Sigma$ and $\Sigma_\mathcal{A}$ be the collections of first order operators introduced in Definitions 3.4 and 6.2, respectively. Then every operator $B \in \Sigma_\mathcal{A}$ satisfies Assumption 3.2. Additionally, every subset of operators in $\Sigma_\mathcal{A}$ for which the constants in (6.5) are uniformly bounded fulfills Assumption 3.3.

We proceed by first recalling two preliminary results from the literature. The next proposition, which may be found in Brezis (2011, Theorem 4.26), will be instrumental in the proof of a regularity result for distributions with bounded $B$-total variation (see Proposition 6.9).

Proposition 6.6 (Kolmogorov–Riesz) Let $F$ be a bounded set in $L^p(\mathbb{R}^N)$, $1 \le p < +\infty$, and assume that $\|\tau_h f - f\|_{L^p(\mathbb{R}^N)} \to 0$ as $|h| \to 0$, uniformly for $f \in F$. Then, denoting by $F_Q$ the collection of the restrictions to $Q$ of the functions in $F$, the closure of $F_Q$ in $L^p(Q)$ is compact.

We also recall some basic properties of the space $BV_B(Q;\mathbb{R}^N)$ (Proposition 6.7). Before we establish Theorem 6.5, we prove a technical lemma.

Lemma 6.8 Let $k \in \mathbb{N}$. Then there exists a constant $C > 0$ such that, for every $h \in \mathbb{R}^N$ and $w \in W^{k,1}(2Q;\mathbb{R}^N)$, the translate difference $\tau_h w - w$ is controlled in $W^{k-1,1}$ in terms of $|h|$ and $\|w\|_{W^{k,1}}$, where $\tau_h$ is the translation operator defined in (6.3).

Proof By the linearity of $\tau_h$, it suffices to combine two observations. On the one hand, by the Sobolev embedding theorem (see, e.g., Leoni 2009), we have (6.6). On the other hand, by the continuity of the translation operator in $L^1$ (see, e.g., Brezis (2011, Lemma 4.3) for a proof in $\mathbb{R}^N$; the analogous argument holds on bounded open sets) we have the limsup bound (6.7). The result follows by combining (6.6) and (6.7).

The next proposition shows that operators in $\Sigma_\mathcal{A}$ satisfy Assumption 3.2.

Proposition 6.9 Let $B \in \Sigma_\mathcal{A}$, and let $BV_B(Q;\mathbb{R}^N)$ be the space introduced in Definition 3.1. Then the injection of $BV_B(Q;\mathbb{R}^N)$ into $L^1(Q;\mathbb{R}^N)$ is compact.

Proof For every $u \in BV_B(Q;\mathbb{R}^N)$ we still denote by $u$ its extension to $BV_B(2Q;\mathbb{R}^N)$ provided by Lemma 6.3. In view of Proposition 6.7, for every $u \in BV_B(Q;\mathbb{R}^N)$ we then find a sequence of approximating maps $\{v^n_u\}$. With a slight abuse of notation, we still denote by $v^n_u$ the $C^d$-extension of the above maps to the whole $\mathbb{R}^N$ (see, e.g., Fefferman 2007), where $d$ is the order of the operator $\mathcal{A}$. Without loss of generality, up to multiplication by a cut-off function, we can assume that $v^n_u \in C^d_c(3Q;\mathbb{R}^N)$ for every $n \in \mathbb{N}$. We first show that the uniform translation estimate (6.10) holds for the family $F$ so constructed, where we recall $\tau_h$ from (6.3), and where, for fixed $u \in F$, $v^n_u$ is as above and satisfies (6.8). Let $h \in \mathbb{R}^N$ and let $\delta_h$ be the Dirac distribution centered at $h \in \mathbb{R}^N$. By the properties of the fundamental solution $P_\lambda$ we deduce (6.9) for every $i = 1, \ldots, N$ and every $\lambda \in \{-1,1\}^N$.
Therefore, we obtain the desired translation control for every $\lambda \in \{-1,1\}^N$, where in the last inequality we used the fact that $\tau_h P_\lambda - P_\lambda \in W^{d-1,1}(\mathbb{R}^N;\mathbb{R}^N)$ owing to Definition 6.1, the identity $\tau_h \frac{\partial^a}{\partial x^a}P_\lambda = \frac{\partial^a}{\partial x^a}(\tau_h P_\lambda)$, as well as Definition 6.1, Assertion 2. We close this subsection by proving a compactness and lower-semicontinuity result for functions with uniformly bounded $BV_{B_n}$-norms. We recall that the definition of $M_\mathcal{A}$ is found in (6.2).

Proposition 6.10 Let $\{B_n\}_{n=1}^\infty \subset \Sigma_\mathcal{A}$ be such that $B_n \to B$ in $\|\cdot\|_\infty$ and the constants $C_{B_n}$ in (6.5) are uniformly bounded. For every $n \in \mathbb{N}$ let $v_n \in BV_{B_n}(Q;\mathbb{R}^N)$ be such that
$$\sup\left\{\|v_n\|_{BV_{B_n}(Q;\mathbb{R}^N)} : n \in \mathbb{N}\right\} < +\infty. \tag{6.11}$$
Then, up to a subsequence, $v_n \to v$ strongly in $L^1(Q;\mathbb{R}^N)$ for some $v$ (6.12), and the total variations $|B_n v_n|_{\mathcal{M}_b}$ are lower semicontinuous along the sequence (6.13).

Proof Let $v_n$ satisfy (6.11). With a slight abuse of notation we still indicate by $v_n$ the $BV_{B_n}$-continuous extension of the above maps to $\mathbb{R}^N$ (see Lemma 6.3). Let $\varphi \in C^\infty_c(2Q;\mathbb{R}^N)$ be a cut-off function such that $\varphi \equiv 1$ on $Q$, and for every $n \in \mathbb{N}$ let $\tilde v_n$ be the map $\tilde v_n := \varphi v_n$. Note that $\mathrm{supp}\,\tilde v_n \subset\subset 2Q$. Additionally, by Lemma 6.3 there holds (6.14), where in the last inequality we used Lemma 6.3, and where the constants $C_1$ and $C_2$ depend only on the cut-off function $\varphi$. To prove (6.12) we first show the uniform translation estimate (6.15), where we recall $\tau_h$ from (6.3). Arguing as in the proof of (6.10), by (6.14) we deduce the required bound for $|h|$ small enough, since $\mathrm{supp}\,\tilde v_n \subset\subset 2Q$, for every $n \in \mathbb{N}$. Property (6.15) follows by (6.2). Owing to Proposition 6.6, we deduce (6.12). We now prove (6.13). Let $\varphi \in C^\infty_c(Q;\mathbb{R}^{N\times N})$ be such that $|\varphi| \le 1$. Testing $B_n v_n$ against $\varphi$ and passing to the limit, in the last step we use the fact that $v_n \to v$ strongly in $L^1(Q)$ and $B_n \to B$ in $\|\cdot\|_\infty$. This completes the proof of (6.13) and of the proposition.

Proof of Theorem 6.5 Let $B \in \Sigma_\mathcal{A}$ be given. The fact that $B$ satisfies Assumption 3.2 follows by Propositions 6.7 and 6.9. The fulfillment of Assumption 3.3 is a direct consequence of Proposition 6.10.

Training Schemes with Fixed and Multiple Operators $\mathcal{A}$

In this subsection we provide a construction of training sets associated to a given differential operator $\mathcal{A}$, namely collections of differential operators $B$ for which our training scheme is well-posed (see Definitions 5.1 and 5.2). We first introduce a collection $\hat\Sigma[\mathcal{A}]$ for a given operator $\mathcal{A}$ of order $d \in \mathbb{N}$.

Definition 6.11 Let $\mathcal{A}$ be a differential operator of order $d \in \mathbb{N}$. We denote by $\hat\Sigma[\mathcal{A}]$ the set
$$\hat\Sigma[\mathcal{A}] := \{B : (\mathcal{A}, B) \text{ is a training operator pair and } \|B\|_\infty \le 1\}.$$

The first result of this subsection is the following.

Theorem 6.12 Let $\mathcal{A}$ be a differential operator of order $d \in \mathbb{N}$, and assume that $\Sigma[\mathcal{A}]$ is a non-empty subset of $\hat\Sigma[\mathcal{A}]$ which is closed under $\|\cdot\|_\infty$-convergence with respect to the property of having finite-dimensional null space. Then the collection $\Sigma[\mathcal{A}]$ is a training set (see Definition 5.1).

Proof By the definition of $\Sigma[\mathcal{A}]$ we just need to show that $\Sigma[\mathcal{A}]$ is closed in $\|\cdot\|_\infty$. Let $u \in C^\infty(Q;\mathbb{R}^N)$ and $\{B_n\}_{n=1}^\infty \subset \Sigma[\mathcal{A}]$ be given. Then, up to a subsequence (not relabeled), we may assume that $B_n \to B$ in $\|\cdot\|_\infty$. We claim that $B \in \Sigma_\mathcal{A}$. The fact that $N(B)$ is finite-dimensional follows by definition. To conclude the proof of the theorem we still need to show that $(\mathcal{A}, B)$ satisfies Definition 6.1, Assertion 2. Fix approximating maps as in (6.16). Integrating by parts we obtain the corresponding identity for every $i = 1, \ldots, N$. Taking the limit first as $n \to \infty$, and then as $k \to \infty$, since $B_n \to B$ in $\|\cdot\|_\infty$ and in view of (6.16), we conclude that (6.1) holds for the pair $(\mathcal{A}, B)$. The proof of the second part of Assertion 2 is analogous. This shows that $(\mathcal{A}, B)$ satisfies Definition 6.1 and concludes the proof of the theorem.

Remark 6.13 We note that the result of Theorem 6.12 still holds if we replace the upper bound 1 in Definition 6.11 with an arbitrary positive constant.
We additionally point out that requiring the finite-dimensional-kernel property to be preserved in the limit automatically ensures the existence of a lower bound on the $\|\cdot\|_\infty$-norms of the operators. In other words, the null operator is not included in our analysis. As a final remark, we stress that, if $\hat\Sigma[\mathcal{A}]$ contains an operator $\bar B$ with finite-dimensional null space, then a training set $\Sigma[\mathcal{A}] \subset \hat\Sigma[\mathcal{A}]$ which is closed in the $\|\cdot\|_\infty$-norm with respect to the property of having finite-dimensional null space can be constructed by taking the intersection of $\hat\Sigma[\mathcal{A}]$ with a small enough neighborhood of $\bar B$ in the $\|\cdot\|_\infty$-topology. In fact, denoting by $B^i \in M^3_N$, $i = 1, \ldots, N$, the coefficients of $\bar B$, the symbol of $\bar B$ is defined as
$$\bar B[\xi] := \sum_{i=1}^N B^i \xi_i, \qquad \xi \in \mathbb{C}^N.$$
The condition of having finite-dimensional null space is equivalent to the so-called C-ellipticity condition, which consists in the injectivity of the map $\bar B[\xi]$ as a linear map on $\mathbb{C}^N\setminus\{0\}$ for every $\xi \in \mathbb{C}^N\setminus\{0\}$ [see Breit et al. (2017, Section 2.3)]. By linearity, this in turn can be reduced to the injectivity of $\bar B[\xi]$ on $\mathbb{C}^N\setminus\{0\}$ for every $\xi \in B_\mathbb{C}(0,1)\setminus\{0\}$, where $B_\mathbb{C}(0,1)$ is the unit ball centered at the origin in the complex plane. In particular, this condition is stable with respect to small $\|\cdot\|_\infty$-perturbations of the coefficients. We now consider the case of multiple operators $\mathcal{A}$.

Definition 6.14 We say that a collection $\mathbb{A}$ of differential operators $\mathcal{A}$ is a training set builder if
$$\sup\{C_\mathcal{A} : \mathcal{A} \in \mathbb{A}\} < +\infty \quad\text{and}\quad \lim_{|h|\to 0}\,\sup\{M_\mathcal{A}(h) : \mathcal{A} \in \mathbb{A}\} = 0, \tag{6.17}$$
where $C_\mathcal{A}$ and $M_\mathcal{A}(h)$ are defined in (6.1) and (6.2), respectively. We then define the class $\Sigma[\mathbb{A}]$ via
$$\Sigma[\mathbb{A}] := \bigcup_{\mathcal{A}\in\mathbb{A}} \Sigma[\mathcal{A}],$$
where, for every $\mathcal{A} \in \mathbb{A}$, $\Sigma[\mathcal{A}]$ is a class as in Definition 6.11. We close this section by proving the following theorem.

Theorem 6.15 Let $\mathbb{A}$ be a training set builder. Then $\Sigma[\mathbb{A}]$ is a training set.

Proof The proof follows the argument in the proof of Theorem 6.12, using the fact that the two critical constants $M_\mathcal{A}(h)$ and $C_\mathcal{A}$, in (6.2) and (6.1), respectively, are uniformly bounded due to (6.17).

Explicit Examples and Numerical Observations

In this section we exhibit several explicit examples of operators $\mathcal{A}$ and training sets $\Sigma[\mathcal{A}]$, we provide numerical simulations, and we make some observations derived from them.

The Existence of Fundamental Solutions of Operators $\mathcal{A}$

One important requirement in Definition 6.1 is the existence of the fundamental solution $P_\lambda \in L^1(\mathbb{R}^N;\mathbb{R}^N)$ of a given operator $\mathcal{A}$. A result in this direction can be found in Hsiao and Wendland (2008, p. 351, Section 6.3), where an explicit form of the fundamental solution for Agmon–Douglis–Nirenberg elliptic systems with constant coefficients is provided.

Remark 7.1 In the case in which $N = 2$, $\mathcal{A}$ has order 2 and satisfies the assumptions in Hsiao and Wendland (2008, p. 351, Section 6.3), the fundamental solution $P_\lambda$ can be written (7.1) in terms of the fundamental solution $L$ of Laplace's equation and of a constant $R_\mathcal{A}$ depending on $\mathcal{A}$, the integration being taken over the unit circle $|\eta| = 1$ with arc-length element $d\omega_\eta$. In the special case considered below, the fundamental solution $P_\alpha$, with $\mathcal{A}P_\alpha = \alpha\delta$ for $\alpha \in \mathbb{R}^2$, is given explicitly. We observe that $\nabla P_\alpha$ is positively homogeneous of degree $-1$ $(= 1 - N)$. Also, since $R_\mathcal{A}$ in (7.1) is a constant, $\nabla P_\lambda$ must have the same homogeneity as $\nabla P_\alpha$, which is $1 - N$.

Proposition 7.2 Let $\mathcal{A}$ be a differential operator of order $d \in \mathbb{N}$, and assume that its fundamental solution $P_\lambda$ is such that $\frac{\partial^a}{\partial x^a}P_\lambda$ is positively homogeneous of degree $1 - N$ for all multi-indexes $a \in \mathbb{N}^N$ with $|a| = d - 1$. Then property (6.2) is satisfied.
Proof Let $s \in (0,1)$ be fixed. Since $\frac{\partial^a}{\partial x^a}P_\lambda$ is positively homogeneous of degree $1 - N$ for all multi-indexes $a \in \mathbb{N}^N$ with $|a| = d - 1$, by Temam (1983, Lemma 1.4) we deduce an estimate valid for every $x \in \mathbb{R}^N$, $0 \le s \le 1$, and $|h| \le 1/2$, where the constant $C$ is independent of $x$ and $h$.

Remark 7.3 As a corollary of Proposition 7.2 and Remark 7.1, we deduce that all operators $\mathcal{A}$ satisfying the assumptions in Hsiao and Wendland (2008, p. 351, Section 6.3) comply with Definition 6.1, Assertion 1. In particular, differential operators $\mathcal{A}$ which can be written in the form $\mathcal{A} = B^* \circ C$, where $B^*$ is the first order differential operator associated to $B$ and having as coefficients the transposes of the matrices $B^i$, $i = 1, \ldots, N$, and where $C$ is a differential operator of order $d - 1$ having constant coefficients, are such that $(\mathcal{A}, B)$ complies with Definition 6.1.

The Unified Approach to $TGV^2$ and $NsTGV^2$: An Example of $\Sigma[\mathcal{A}]$

In this section we give an explicit construction of an operator $\mathcal{A}$ such that the seminorms $NsTGV^2$ and $TGV^2$, as well as a continuum of topologically equivalent seminorms connecting them, can be generated by operators $B \in \Sigma[\mathcal{A}]$. We start by recalling the definition of the classical symmetrized gradient, $Ev := (\nabla v + \nabla^T v)/2$, and we let $B_{sym}(v)$ be defined as in (2.1), with coefficient tensors $B^1_{sym}$ and $B^2_{sym}$ chosen so that $B_{sym}(v) = Ev$ for all $v \in C^\infty(Q;\mathbb{R}^2)$ (7.6). Then $N(B_{sym})$ is finite dimensional; in particular, it consists of the infinitesimal rigid displacements $x \mapsto a + Wx$, with $a \in \mathbb{R}^2$ and $W$ skew-symmetric. The first part of Definition 6.1 follows from Remark 7.3. Next we verify that (6.1) holds. Indeed, choosing $\mathcal{A}$ as in (7.2), an integration by parts yields the required identity for every $w \in W^{1,2}(Q;\mathbb{R}^2)$ and $v \in C^\infty_c(Q;\mathbb{R}^2)$; that is, (6.1) holds for every open set $U \subset \mathbb{R}^N$ such that $Q \subset U$ (7.7). The same computation holds for $w \in C^\infty_c(Q;\mathbb{R}^2)$ and $v \in BV_B(Q;\mathbb{R}^2)$. This proves that Assertion 2 in Definition 6.1 is also satisfied.

We finally construct an example of a training set $\Sigma[\mathcal{A}]$. For every $0 \le s,t \le 1$, we define $B_{s,t}$ as the first order operator acting on $v \in C^\infty(Q;\mathbb{R}^2)$ via
$$B_{s,t}(v) := \begin{pmatrix} \partial v_1/\partial x_1 & s\,\partial v_1/\partial x_2 + t\,\partial v_2/\partial x_1 \\ (1-s)\,\partial v_1/\partial x_2 + (1-t)\,\partial v_2/\partial x_1 & \partial v_2/\partial x_2 \end{pmatrix}. \tag{7.8}$$
By a straightforward computation, we obtain that $N(B_{s,t})$ is finite dimensional for every $0 \le s,t \le 1$. Additionally, Assertion 1 in Definition 6.1 follows by adapting the arguments in Remark 7.3. Finally, arguing exactly as in (7.7), we again deduce Statement 2 in Definition 6.1. Therefore, the collection $\Sigma[\mathcal{A}]$ given by
$$\Sigma[\mathcal{A}] := \{B_{s,t} : 0 \le s,t \le 1\}$$
is a training set according to Definition 6.11. We remark that $\Sigma[\mathcal{A}]$ includes the operator generating $TGV^2$ (with $s = t = 1/2$) and the operator generating $NsTGV^2$ (with $t = 0$ and $s = 1$), as well as the collection of all "interpolating" regularizers. In other words, our training scheme $(T^2_\theta)$ with training set $\Sigma[\mathcal{A}]$ is able to search for optimal results in a class of operators including the commonly used $TGV^2$ and $NsTGV^2$, as well as any interpolating regularizer.

Comparison with Other Works

In Brinkmann et al. (2019) the authors analyze a range of first order linear operators generated by diagonal matrices. To be precise, letting $\mathcal{B} = \mathrm{diag}(\beta_1, \beta_2, \beta_3, \beta_4)$, Brinkmann et al. (2019) treats first order operators $B$ obtained by applying $\mathcal{B}$ to the gradient: that is, instead of viewing $\nabla v$ as a $2\times 2$ matrix as we do, in Brinkmann et al. (2019) $\nabla v$ is represented as a vector in $\mathbb{R}^4$. In this way, the symmetric gradient $Ev$ in (7.6) can also be written in diagonal form. However, the representation above does not allow one to consider skewed symmetric gradients $B_{s,t}(v)$ with the structure introduced in (7.8). Indeed, let $s = t = 0.2$. We have
$$B_{0.2,0.2}(v) = \begin{pmatrix} \partial_1 v_1 & 0.2\,(\partial_2 v_1 + \partial_1 v_2) \\ 0.8\,(\partial_2 v_1 + \partial_1 v_2) & \partial_2 v_2 \end{pmatrix}.$$
Rewriting the matrix above as a vector in $\mathbb{R}^4$, we obtain $(\partial_1 v_1,\ 0.2\,\partial_2 v_1 + 0.2\,\partial_1 v_2,\ 0.8\,\partial_2 v_1 + 0.8\,\partial_1 v_2,\ \partial_2 v_2)$. That is, we would have to mix the components $\partial_2 v_1$ and $\partial_1 v_2$, which no diagonal matrix acting on the vectorized gradient can achieve.
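Before turning to the simulations, here is a quick numerical sanity check of the interpolation property of $B_{s,t}$. The form (7.8) used in the snippet is the editorial reconstruction given above (consistent with the stated endpoint cases), and the periodic forward differences are an arbitrary discretization choice made for the example.

```python
import numpy as np

def grad(v):
    """Periodic forward-difference partials of a 2D vector field v, shape (2, H, W)."""
    d = np.zeros((2, 2) + v.shape[1:])  # d[i, j] = discrete ∂ v_i / ∂ x_j
    for i in range(2):
        d[i, 0] = np.roll(v[i], -1, axis=0) - v[i]
        d[i, 1] = np.roll(v[i], -1, axis=1) - v[i]
    return d

def B_st(v, s, t):
    """Discrete action of the interpolated operator B_{s,t} on v.

    With s = t = 1/2 this is the (discrete) symmetrized gradient used by TGV^2;
    with s = 1, t = 0 it is the full gradient used by NsTGV^2."""
    d = grad(v)
    out = np.empty_like(d)
    out[0, 0] = d[0, 0]
    out[1, 1] = d[1, 1]
    out[0, 1] = s * d[0, 1] + t * d[1, 0]
    out[1, 0] = (1 - s) * d[0, 1] + (1 - t) * d[1, 0]
    return out

v = np.random.default_rng(1).standard_normal((2, 8, 8))
sym = B_st(v, 0.5, 0.5)
assert np.allclose(sym[0, 1], sym[1, 0])           # symmetric off-diagonals
assert np.allclose(B_st(v, 1.0, 0.0), grad(v))     # full gradient endpoint
```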
Numerical Simulations and Observations

Let $\mathcal{A}$ be the operator defined in Sect. 7.2, and let $\Sigma[\mathcal{A}] = \{B_{s,t} : 0 \le s,t \le 1\}$, where, for $0 \le s,t \le 1$, $B_{s,t}$ are the first order operators introduced in (7.8). As we remarked before, the seminorm $PGV^2_{B_{s,t}}$ interpolates between the $TGV^2$ and $NsTGV^2$ regularizers. We define the cost function $C(\alpha,s,t)$ to be
$$C(\alpha,s,t) := \|u_{\alpha,B_{s,t}} - u_c\|_{L^2(Q)}. \tag{7.9}$$
To explore the numerical landscape of the cost function $C(\alpha,s,t)$, we consider the discrete box-constraint (7.10). We perform numerical simulations on the images shown in Fig. 1: the first image represents a clean image $u_c$, whereas the second one is a noised version $u_\eta$, with heavy artificial Gaussian noise. The reconstructed image $u_{\alpha,B}$ in Level 2 of our training scheme is computed by using the primal-dual algorithm presented in Chambolle and Pock (2011).
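For reference, the following is a minimal sketch of the Chambolle–Pock primal-dual iteration, applied here to the simpler ROF problem $\min_u \|u - u_\eta\|^2_{L^2} + \alpha\,TV(u)$ rather than to the full $PGV^2_{\alpha,B_{s,t}}$-regularized Level 2 problem (which additionally carries the auxiliary field $v$ and the operator $B_{s,t}$). The discrete gradient, the step sizes, and the iteration count are standard textbook choices made here, not taken from the paper.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with zero increments at the boundary."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Backward-difference divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def rof_primal_dual(u_eta, alpha, n_iter=300):
    """Solve min_u ||u - u_eta||_2^2 + alpha*TV(u) by Chambolle-Pock."""
    tau = sigma = 1.0 / np.sqrt(8.0)  # tau*sigma*||grad||^2 <= 1, since ||grad||^2 <= 8
    u = u_eta.copy(); u_bar = u.copy()
    px = np.zeros_like(u); py = np.zeros_like(u)
    for _ in range(n_iter):
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2) / alpha)  # project onto |p| <= alpha
        px, py = px / norm, py / norm
        u_old = u
        # proximal step for F(u) = ||u - u_eta||^2 with step tau
        u = (u + tau * div(px, py) + 2.0 * tau * u_eta) / (1.0 + 2.0 * tau)
        u_bar = 2.0 * u - u_old  # over-relaxation with theta = 1
    return u
```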
Fig. 2 shows, from left to right, the mesh and contour plots of the cost function $C(\bar\alpha, s, t)$, in which $\bar\alpha = (\bar\alpha_0, \bar\alpha_1)$ is fixed and $(s,t) \in [0,1]^2$ (see also Table 1). The associated optimal reconstruction $u_{\bar\alpha,B_{\bar s,\bar t}}$ is indeed "better" in the sense of our training scheme ($L^2$-difference). We remark again that the introduction of the $PGV^2_{\alpha,B}$ regularizers into the training scheme is only meant to expand the training choices, not to provide a superior seminorm with respect to the popular choices $TGV^2$ or $NsTGV^2$. Whether the optimal regularizer is $TGV^2$, $NsTGV^2$ or an intermediate regularizer depends entirely on the given training image $u_\eta = u_c + \eta$.

We conclude this section with a further study of the numerical landscape associated with the cost function $C(\alpha,s,t)$. In this second example we again consider the discrete box-constraint in (7.10), and we analyze the images shown in Fig. 3: as before, the first image represents the clean image $u_c$, whereas the second one is a noised version $u_\eta$. The reconstructed image $u_{\alpha,B}$ in Level 2 of our training scheme is again computed by using the primal-dual algorithm presented in Chambolle and Pock (2011). We report that the minimum value of (7.9), over the admissible values in (7.10), is achieved at $\bar\alpha_0 = 5.6$, $\bar\alpha_1 = 1.2$, $\bar s = 0.8$, and $\bar t = 0.2$. The optimal reconstruction $u_{\bar\alpha,B_{\bar s,\bar t}}$ is the last image in Fig. 3, whereas the optimal result with $B_{s,t} \equiv E$, i.e., $u_{\bar\alpha,TGV}$, is the third image in Fig. 3. Although the optimal reconstructed images $u_{\bar\alpha,B_{\bar s,\bar t}}$ and $u_{\bar\alpha,TGV}$ do not present many differences to the naked eye, also in this second example the reconstruction $u_{\bar\alpha,B_{\bar s,\bar t}}$ attains a strictly smaller cost; namely, it is "better" in the sense of our training scheme ($L^2$-difference). To visualize the change of the cost function produced by different values of $(s,t) \in [0,1]^2$, we fix $\bar\alpha_0 = 5.6$ and $\bar\alpha_1 = 1.9$ and plot in Fig. 4 the mesh and contour plots of $C(\bar\alpha, s, t)$.

Conclusions

We have introduced a novel class of regularizers providing a generalization of $TGV^2$ to the case in which the higher-order operator can be different from the symmetric gradient. After establishing basic properties of this class of functionals, we have studied the well-posedness of a bilevel learning scheme selecting the optimal regularizer in our class in terms of a quadratic cost function. Eventually, we have shown some very first numerical simulations of our scheme. We point out that in both examples in Figs. 1 and 3 the optimal reconstructions do not present a clear distinction to the naked eye with respect to their $TGV$ counterparts, although they perform much better in terms of the cost-function landscapes. We conjecture this behavior not to be the general case. Further numerical investigations are beyond the scope of this paper and will be the subject of forthcoming works.
12,342.4
2019-02-04T00:00:00.000
[ "Mathematics" ]
A Novel Approach for Characterizing Microsatellite Instability in Cancer Cells

Microsatellite instability (MSI) is characterized by the expansion or contraction of DNA repeat tracts as a consequence of DNA mismatch repair deficiency (MMRD). Accurate detection of MSI in cancer cells is important, since MSI is associated with several cancer subtypes and can help inform therapeutic decisions. Although experimental assays have been developed to detect MSI, they typically depend on a small number of known microsatellite loci or mismatch repair genes and have limited reliability. Here, we report a novel genome-wide approach for MSI detection based on the global detection of insertions and deletions (indels) in microsatellites found in expressed genes. Our large-scale analyses of 20 cancer cell lines and 123 normal individuals revealed striking indel features associated with MSI: there is a significant increase of short microsatellite deletions in MSI samples compared to microsatellite stable (MSS) ones, suggesting a mechanistic bias in repair efficiency between insertions and deletions in normal human cells. By incorporating this observation into our MSI scoring metric, we show that our approach can correctly distinguish between MSI and MSS cancer cell lines. Moreover, when we applied this approach to primary tumor samples, our metric was also consistent with the diagnosed MSI status. Thus, our study offers new insight into the DNA mismatch repair system, and provides a novel, more reliable MSI diagnostic method for clinical oncology.

Introduction

In normal cells, the mismatch repair (MMR) system provides a highly efficient mechanism for correcting errors that occur during DNA replication. When this system is impaired, e.g., through inactivation of human mismatch repair genes such as MLH1, MSH2 and MSH3, mismatch repair deficiency leads to uncorrected insertions/deletions (indels), particularly in microsatellites, where a short sequence unit (one to six nucleotides long) is repeated multiple times [1]. Microsatellite instability (MSI) refers to the genetically aberrant condition in which microsatellite alleles in the genome gain or lose repeat units at a much higher frequency than in normal cells. The normal condition is often referred to as microsatellite stable, or MSS. Widespread MSI usually indicates mismatch repair deficiency (MMRD), which can cause the accumulation of mutations in cancer-related genes and lead to carcinogenesis and tumor progression. Accordingly, MSI is frequently observed in several types of cancers, most notably in colon cancer [2] and prostate cancer [3]. The presence of MSI can be used as a marker for specific tumor subtypes and can predict sensitivity to chemotherapy [1]. Moreover, MSI generates significant genetic heterogeneity and can be used for other purposes, such as the isolation of drug-resistant clones and the subsequent characterization of drug resistance mechanisms [4]. Currently, several assays exist for the detection of MSI, including those looking for mutations in MMR genes, measuring their expression, or looking for unit number alterations in a set of microsatellites frequently affected by MSI [5,6]. However, these assays are not always reliable. For example, two studies using different assays provided opposite results about the MSI status of the PC3 prostate cancer cell line [3,7]. Moreover, commonly used MSI markers are cell type specific. For instance, it has been reported that markers commonly used in colon cancer have low sensitivity in acute myeloid leukemia [8].
Many other factors can negatively affect the accuracy of available MSI detection techniques. Methods that rely on the loss of expression or mutation of known MMR genes may be hampered by the complexity of the MMR system, which consists of multiple genes and possibly other, still uncharacterized ones. The low sensitivity of unit-number-based approaches may be explained by the random nature of MSI: MMRD may affect different sets of microsatellites in different individuals. That may explain why the limited sets of markers used in current MSI detection assays sometimes give false negatives. To overcome such limitations, we developed a novel method that improves the sensitivity of MSI detection by incorporating all detectable microsatellites across the genome characterized by next-generation sequencing. We chose to base our analysis on RNA-Seq, as it is a relatively mature next-generation sequencing technique that has already been widely applied. Although short-read sequences pose challenges for mapping and characterizing microsatellites, we have overcome these issues and demonstrated the reliability of our genome-wide scanning approach for MSI detection. In addition to basing detection on a vastly increased number of microsatellite indels in MSI cells, our approach exploits a phenomenon so far only reported in yeast but that we observed in human cells in this study: indel length distributions in MSI and MSS samples are significantly different [9]. Genome-wide detection of microsatellite indels avoids the shortcomings of a limited marker set and provides higher sensitivity for MSI detection. Besides being sensitive, our approach does not require a matched germline control from the same individual, and can therefore be applied to cancer cell lines.

Datasets

In this study, we used RNA-seq datasets for 20 different cancer cell lines in order to investigate MSI frequencies in different cancer types. All cancer cell line datasets were obtained from published studies, and the MSI status of many of these cell lines has already been reported (although for some cell lines the results from different studies conflict). Table 1 shows the MSI status information and the sources of the RNA-seq datasets. We also used two published RNA-seq studies of HapMap lymphoblastoid samples collected from 69 Nigerian [10] and 54 European individuals [11] as controls to define MSI status in normal human cells, as MSI is not expected in those samples. We also analyzed paired colon cancer tumor and normal RNA-Seq samples from 14 patients diagnosed with MSI tumors and 14 patients diagnosed with MSS tumors [12].

Detecting Microsatellite Indels

We first extracted microsatellites in all human RefSeq transcripts using Tandem Repeats Finder (TRF) [13]. In this study, microsatellites were defined as tandem repeats with repeat units of 1 to 6 base pairs. For each RNA-seq dataset, we then aligned the short reads to RefSeq transcripts using BWA [14], which allows gapped alignment. We used a maximum gap size of 20 bp for all analyses reported here; we also tested larger gap sizes, but they did not increase the number of detected indels in the analyses that follow. For the remaining parameters we used the BWA defaults. We then used DINDEL [15] to call indels from the aligned reads. From the output of DINDEL, we filtered out common indels listed in the Single Nucleotide Polymorphism database (dbSNP; Build 132) [16].
Finally, we compared the coordinates of the indels to the coordinates of the microsatellites found by TRF and identified indels within microsatellites (Figure 1a).

Quantifying MSI Using the MSI-seq Index

We evaluated several MSI measures based on the number of indels and microsatellites determined by our analysis pipeline. These measures included the proportion of microsatellite insertions over all insertions (denoted PI), and the proportion of microsatellite deletions over all deletions (denoted PD; Figure 1b). We also evaluated PI/PD, since our results, in agreement with a previous study in yeast [9], suggested that MMRD might alter the relative rates of insertions and deletions. PI/PD is also referred to as the MSI-seq index in this study.

Expression Profiling of MMR-Related Genes

To compare our approach with methods using the expression level and mutation status of MMR system components, we determined the expression levels of MMR genes (including MLH1, MLH3, MSH2, MSH3, MSH4, MSH5, MSH6, PMS1 and PMS2) in cancer cell lines from the RNA-seq data. We used Cufflinks [17] to compute normalized expression levels measured in fragments per kilobase of transcript per million reads (FPKM). We also looked for indels in MMR genes using the DINDEL results (not restricted to microsatellites). Finally, we looked for single nucleotide variants in the same MMR genes using SNVseeqer [4,18,19]. As we did for indel calling, we only retained SNVs not listed in dbSNP for further analysis.

Indel Detection in RNA-seq Data of MSI/MSS Samples

We first sought to identify indels within the RNA-seq samples used in this study (MSI, MSS, HapMap). After short-read alignment using BWA, we used DINDEL to call indels in our 20 cancer cell lines and 123 HapMap RNA-seq datasets. From the DINDEL outputs, we filtered out indels listed in the Single Nucleotide Polymorphism database (dbSNP; Build 132) [16]. After filtering, absolute indel counts varied from around 200 to more than 2000 (Figure 2).

Different Distributions of Microsatellite Indel Alterations in MSI/MSS Samples

Next, we sought to determine the frequency with which microsatellite sequences are altered by indels in each RNA-seq sample. A total of 505,657 microsatellites were found in 32,199 RefSeq transcript sequences using TRF. We then determined the number of microsatellites altered by at least one indel in each RNA-seq sample. This quantity ranges from 54 to 482 in HapMap samples, and from 77 to 1454 in cancer cell lines. In all following analyses, we only consider indels within microsatellites. In each sample, we then determined the proportion of microsatellites altered by indels located in the 5′ UTR, coding sequence, 3′ UTR or non-coding RNAs, and determined whether these proportions differ between sample groups. In non-coding RNAs, we observed no significant differences between HapMap, MSI and MSS samples (p > 0.05; Figure 3d). The prevalence of microsatellite indels in the coding regions of MSI (and to some extent MSS) cancer cells suggests that these indels might confer a selective advantage to cancer cells, consistent with findings in other organisms [20]. In a previous study in yeast, it was found that after mutating DNA mismatch repair proteins, there was a significant increase in the number of short deletions in microsatellites, while the number of insertions in microsatellites did not change significantly [9]. Inspired by those results, we also studied the distribution of detected microsatellite indel lengths in MSI/MSS samples.
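A sketch of this tabulation step (the signed-length encoding of indels is an assumption made for the example, not the paper's data format):

```python
from collections import Counter

def length_distribution(indels):
    """Tabulate indel lengths, split into insertions (+) and deletions (-).

    `indels` is assumed to be an iterable of signed lengths, e.g. +2 for a
    2 bp insertion and -1 for a 1 bp deletion."""
    ins = Counter(n for n in indels if n > 0)
    dels = Counter(-n for n in indels if n < 0)
    return ins, dels

ins, dels = length_distribution([+1, -1, -1, -2, +3, -1])
print("insertions:", dict(ins))  # {1: 1, 3: 1}
print("deletions:", dict(dels))  # {1: 3, 2: 1}
```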
We noticed that short deletions of 1 bp are more frequent in MSI cell lines than they are in HapMap samples (Figure 4a), whereas no comparable enrichment is observed for insertions (Figure 4b). These observations suggest that the mismatch repair deficiency associated with microsatellite instability preferentially gives rise to short deletions.

The MSI-seq Index Correctly Predicts MMRD Cell Lines

Our original goal was to identify a genome-wide index or measure that reliably distinguishes MSI samples from MSS samples. We initially investigated two possible measures: the proportion of microsatellite insertions over all insertions (PI), and the proportion of microsatellite deletions over all deletions (PD; Figure 1b). The analysis in the previous section revealed that, compared to normal human cells, the number of short deletions is significantly increased in MSI cell lines but not in microsatellite stable (MSS) cancer cell lines. Due to this difference, PD discriminated between MSI and MSS samples, albeit not perfectly (data not shown). In contrast, although PI is unable to discriminate between the two types of cells, it still reflects the absolute number of microsatellite indels. As a result, normalizing PD by PI makes the MSI statuses of different samples more comparable. We thus investigated the ratio between the two proportions as an alternative index to discriminate between MSI and MSS cell lines. When we calculated PI/PD for MSI and MSS cancer cell lines, we observed significant differences between the two groups (p = 2.4e-5, t-test), with MSI showing lower PI/PD (Figure 5a). As expected, PI/PD values were also significantly different between MSI and HapMap samples (p = 1.5e-10, t-test). The fact that MSI and MSS samples have statistically different PI/PD values does not by itself mean that PI/PD is highly reliable at discriminating MSI from MSS. However, we further observed that PI/PD clearly separates MSI and MSS cell lines into two distinct groups (Figure 5a). Indeed, all cell lines that have been identified as MSI (Table 1) exhibit PI/PD ratios lower than 1. In contrast, MSS cell lines all have ratios larger than 1, similar to HapMap samples (Figure 5a). Those results demonstrate that the PI/PD ratio, which we refer to as the MSI-seq index, can accurately predict MSI status in cancer cell lines, even across multiple cancer types. The results above show that the MSI-seq index can reliably predict MSI status. We therefore applied our analysis to cell lines whose MSI status had not been characterized. In our study, all such cell lines fell in the range of HapMap samples (Figure 5b). Therefore, by our criteria, they are likely to be MSS cell lines. Predictions for some cell lines were supported by additional evidence. For instance, all three MSI-uncharacterized breast cancer cell lines were categorized as MSS. This result is consistent with previous studies suggesting that MSI may only play a minor role in the oncogenesis of breast tumors compared to other tumor types [21].
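To illustrate how the index is assembled from the pipeline's outputs, here is a self-contained sketch; the coordinate tuples, the helper names (`build_index`, `in_microsatellite`, `msi_seq_index`) and the toy data are hypothetical, while the PI/PD < 1 decision rule mirrors the cell-line observation above.

```python
from bisect import bisect_right

def build_index(microsats):
    """Index microsatellite intervals per chromosome, sorted by start."""
    by_chrom = {}
    for chrom, start, end in microsats:
        by_chrom.setdefault(chrom, []).append((start, end))
    for ivs in by_chrom.values():
        ivs.sort()
    return by_chrom

def in_microsatellite(index, chrom, pos):
    """True if pos falls inside an indexed microsatellite.

    Assumes the intervals on each chromosome are non-overlapping."""
    ivs = index.get(chrom, [])
    i = bisect_right(ivs, (pos, float("inf"))) - 1
    return i >= 0 and ivs[i][0] <= pos <= ivs[i][1]

def msi_seq_index(insertions, deletions, microsats):
    """PI = fraction of insertions in microsatellites; PD = same for deletions.

    Returns (PI, PD, PI/PD); PI/PD < 1 suggests MSI, as in the cell-line data."""
    index = build_index(microsats)
    pi = sum(in_microsatellite(index, c, p) for c, p in insertions) / len(insertions)
    pd = sum(in_microsatellite(index, c, p) for c, p in deletions) / len(deletions)
    return pi, pd, pi / pd

# Toy example: microsatellites on chr1 at [100, 120] and [500, 530].
ms = [("chr1", 100, 120), ("chr1", 500, 530)]
ins = [("chr1", 110), ("chr1", 300)]                   # 1 of 2 in microsatellites
dels = [("chr1", 505), ("chr1", 515), ("chr1", 900)]   # 2 of 3 in microsatellites
pi, pd, ratio = msi_seq_index(ins, dels, ms)
print(f"PI={pi:.2f} PD={pd:.2f} MSI-seq={ratio:.2f} -> {'MSI' if ratio < 1 else 'MSS'}")
```

On real data, PI and PD would be computed from the dbSNP-filtered DINDEL calls and the TRF microsatellite coordinates described in the Methods.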
Expression or Mutation of Known MMR Genes Fails to Reliably Predict MSI Status

We then sought to determine whether other, simpler approaches based on the expression or mutation of known MMR genes could predict MSI status. Previous findings have indeed suggested that inactivation of MMR genes can cause MMRD [1]. Therefore, in principle the expression profile of MMR-related genes could also predict MMRD and MSI. We looked at 7 MSI cell lines and 5 MSS cell lines with MSI statuses validated by previous studies. The expression levels of known MMR genes did not correlate with MSI status (Figure 6; FPKM values normalized by the mean value of each column). For instance, MLH1 expression is repressed in two cell lines, HCT116 and DU145. This loss is accompanied by the loss of MLH3 expression in DU145 and of MSH3 expression in HCT116, as previously reported [22]. In another MSI cell line, LNCaP, all these genes are expressed at normal levels, and the repressed expression of MSH2 may instead be responsible for MSI [3,23]. Analysis of point mutations and indels also showed a lack of correlation with MSI status, with MSS samples showing missense variants in MMR genes while many MSI cells have no variants (Table 2). Therefore, unlike the MSI-seq index, neither MMR gene expression levels nor mutations are truly reliable for MSI detection.

Application to Clinical Samples

We then tested whether our approach can predict MSI status in primary tumors. We analyzed paired colon cancer tumor and normal RNA-Seq samples from 14 patients diagnosed with MSI tumors and 14 patients diagnosed with MSS tumors [12]. The ratios for MSI tumor samples range from 0.630 to 0.814, while the ratios for MSS tumor samples range from 0.986 to 1.573. Therefore, a threshold around 0.9 would produce accurate MSI status predictions, similar to what we observed in cancer cell lines. Moreover, the ratios for MSI tumor samples showed a significant difference compared to paired normal samples (p = 5.57e-11; t-test), while there is no difference between MSS tumor and normal samples (p = 0.6438; t-test) (Figure 7). This result indicates that our method not only works in cancer cell lines, but is also effective at detecting MSI status in primary tumors.

Discussion

In this study, we have introduced a novel and reliable index (MSI-seq) for detecting microsatellite instability using transcriptome sequencing data. Unlike other approaches that are limited to querying a small number of genes or microsatellite regions, our method integrates indel data from a large number of expressed microsatellites. Our approach does not depend on germline DNA and is therefore equally applicable to cancer cell lines and primary samples. We have shown that our approach is more accurate than approaches based on the detection of point mutations or expression changes in mismatch repair genes. Another advantage of our approach is that the MSI-seq index provides a continuous measure of MSI. In previous studies, MSI was often treated as an all-or-nothing event: traditional assays could only tell whether a sample is MSI or MSS. However, in biologically relevant cases, the MSI status of samples is likely to span a continuous spectrum, and the extent of MSI may have therapeutic implications. Therefore, the MSI-seq index (PI/PD ratio) used in this study may be a promising candidate for quantifying the relative degree of MSI in a sample. During the course of our investigations of indels in cancer cell lines, we made a number of interesting observations. We found that short deletions in microsatellite regions, especially 1 bp deletions, are consistently and highly overrepresented in MSI cancer cell lines compared to MSS cancer cell lines and immortalized but non-malignant HapMap samples. On the other hand, we did not find any consistent increase in longer deletions or insertions in MSI samples. A similar observation has been made in yeast [9]; however, to the best of our knowledge, this is the first time the phenomenon has been reported in human cells.
This observation confirms that multiple DNA repair mechanisms are at play in normal cells and that, within microsatellites, accidental deletions and insertions of different lengths are repaired through distinct mechanisms. Another observation is that there are more microsatellite indels in coding regions in cancer cells, and especially in MSI-positive cancer cells. This discovery is surprising, since a substantial fraction of these indels are expected to cause internal frameshifts, nonsense-mediated decay and therefore gene inactivation. Our results, on the other hand, suggest that cancer cells with increased coding microsatellite indels might be positively selected and therefore that coding microsatellite indels could contribute to tumor phenotypes. Alternatively, these indels might generate functional diversity that allows rapid adaptation of the tumor cells to changing environments, perhaps similar to what has been observed in yeast [20]. Our method is based on transcriptome sequencing, which has some limitations when used for MSI characterization. For instance, it will miss altered microsatellites located in regulatory sequences, which may also play important roles in oncogenesis. Moreover, as indel detection requires adequate read coverage, it may be difficult to detect microsatellite indels in transcripts with low abundance. Despite these disadvantages, inspection of expressed transcripts still provides significant information about MMRD. As RNA-Seq experiments are widely performed and the price of sequencing cancer samples is rapidly decreasing, RNA-Seq-based diagnostic methods will become cost-effective for clinical applications. Moreover, a single RNA-Seq assay can provide an accurate MSI diagnosis along with rich information about many other aspects of the tumor, including functional mutations, gene expression profiles, active pathways, gene fusions, etc., making specialized PCR assays for MSI unnecessary. In conclusion, our method is the first to enable genome-wide characterization of MMRD status in human cells through the integration of high-throughput sequencing data. Consistent with previous findings, we observed a significant increase in short deletions in MSI cells compared to MSS ones. Based on this observation, we showed that the PI/PD ratio (which we also define as MSI-seq) can be used to quantify MSI status in both cancer cell lines and primary tumor samples from patients, and has multiple advantages over traditional assays. Therefore, our method has the potential to serve as a novel diagnostic tool for genomic instability in different cancer samples.
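To make the decision rule concrete, the following minimal sketch (not the authors' published pipeline; the sample ratios and exact cutoff are illustrative assumptions) dichotomizes the continuous PI/PD ratio at the ~0.9 threshold suggested above and checks tumor/normal separation with a paired t-test:

```python
# Minimal sketch (not the authors' published pipeline): dichotomize the
# continuous MSI-seq index (PI/PD ratio) at the ~0.9 cutoff suggested
# above, and check tumor/normal separation with a paired t-test.
# The sample ratios below are illustrative, not the study's data.
from scipy import stats

THRESHOLD = 0.9  # ratios below this are called MSI, above MSS

def call_msi_status(pi_pd_ratio: float) -> str:
    """Dichotomize the continuous MSI-seq index at the chosen threshold."""
    return "MSI" if pi_pd_ratio < THRESHOLD else "MSS"

# Hypothetical PI/PD ratios, chosen within the ranges reported in the text
msi_tumor  = [0.63, 0.70, 0.75, 0.81]
msi_normal = [1.01, 0.99, 1.05, 1.02]

print([call_msi_status(r) for r in msi_tumor])  # ['MSI', 'MSI', 'MSI', 'MSI']
t_stat, p_value = stats.ttest_rel(msi_tumor, msi_normal)  # paired test
print(f"p = {p_value:.3g}")
```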
Fine Classification of QSOs and Seyferts for Activity Types Based on SDSS Spectroscopy

Using SDSS spectroscopy, we have carried out fine optical spectral classification for activity types for 710 AGN candidates. These objects come from a larger sample of some 2,500 candidate AGN pre-selected from various samples: bright objects of the Catalog of Quasars and Active Galactic Nuclei, AGN candidates among X-ray sources, optically variable radio sources, IRAS extragalactic objects, etc. A number of papers have been published with the results of this spectral classification. More than 800 AGN have been identified and classified, including 710 QSOs, Seyferts and Composites. The fine classification shows that many QSOs display the same features as Seyferts, i.e., subtypes between S1 and S2 (S1.2, S1.5, S1.8 and S1.9). We have introduced corresponding subtypes for the QSOs: QSO1.2, QSO1.5, QSO1.8 and QSO1.9, though the last subtype does not appear in the SDSS wavelength range because Hα (the main line for identifying the 1.9 subtype) is mostly redshifted out of it. Thus, independent of luminosity (which serves as the separator between QSOs and Seyferts), AGN show the same features. We have also classified many objects as Composites, whose spectra have characteristics intermediate between Sy and LINERs, Sy and HII, or LINERs and HII; in some cases all three characteristics appear together, resulting in a Sy/LINER/HII subtype. The QSO subtypes, together with the Seyfert ones, allow us to follow AGN properties over a larger redshift range, expanding our knowledge of AGN evolution to the more distant Universe represented by QSOs.

INTRODUCTION

Active galaxies, including both Active Galactic Nuclei (AGN) and Starbursts (SB), are among the most interesting objects in extragalactic astronomy and are especially crucial for the evolution of galaxies, as well as for understanding radiation mechanisms. Their studies are connected to galaxy evolution, the understanding of energy sources, galaxy morphology, interactions and merging, binary and multiple structure, and clustering. According to modern theory, an AGN is a compact region at the centre of a galaxy which is able to outshine its host over the whole, or at least part, of the electromagnetic spectrum. Such excess emission has been observed in gamma-ray, X-ray, UV, optical, IR, microwave (submm/mm) and radio wavelengths. It is therefore important to have multi-wavelength (MW) Spectral Energy Distributions (SEDs) to compare the behavior of these objects in various ranges. There are a number of observational signatures that distinguish an AGN or SB:

• Optical spectra, namely emission lines. They can show broad or narrow line profiles, depending on whether they are produced in the Broad-Line Region (BLR, closer to the nucleus) or the Narrow-Line Region (NLR, relatively farther from the nucleus), respectively. In addition, optical continuum emission comes from the nucleus and is visible whenever there is a direct view of the accretion disc. Relativistic jets can also contribute to this component;

• Nuclear IR emission (at least most of it) is detectable if the accretion disc and its environment are obscured by gas and dust close to the nucleus, which re-emit the UV and optical radiation in the IR;

• X-ray emission; the continuum comes both from the jet and from the hot corona of the accretion disc through scattering, and shows a power-law spectrum;

• Other manifestations of activity.
These may be jets, compact components, interactions, merging, etc., including those not directly related to the nucleus but somehow resulting from its activity, the so-called AGN feedback. Classifications based on optical emission-line spectra began as early as 1943, when Carl Seyfert (Seyfert, 1943) observed emission lines in the spectra of some spiral galaxies ("extragalactic nebulae"), including presently well-known AGN such as NGC 4151, NGC 4051, NGC 1068 (also known as M77), NGC 1275 (the Perseus A radio galaxy), NGC 3516, NGC 5548 and NGC 7469. Especially surprising was the presence of broad emission lines (or broad wings of lines) that were not observed in the spectra of galactic nebulae. These objects were called Seyfert (Sy or S) galaxies. Using an optical spectrum obtained with the 200-inch Hale Telescope on Mt. Palomar, Maarten Schmidt was the first to interpret the spectrum of the radio source 3C 273 as having strongly redshifted (z ≈ 0.158) broad Balmer emission lines corresponding to a recession velocity of 47,000 km/s (Schmidt, 1963). This discovery allowed other astronomers to measure redshifts from the emission lines of other radio sources, thus extending our knowledge to the much more distant extragalactic Universe. These point-like extragalactic radio sources were called quasi-stellar radio sources (quasars) or quasi-stellar objects (QSOs). Later on, based on the presence or absence of broad emission lines, Seyferts were classified into S1 and S2, respectively (Khachikian and Weedman, 1974). The variety of observational manifestations has led to the classification of a variety of AGN types, especially taking into account historical classifications made with the incomplete knowledge of the given epoch. Unified models (or unified schemes; Antonucci and Miller, 1984; Antonucci, 1993; Urry and Padovani, 1995) propose that different observational classes of AGN are a single type of physical object observed under different conditions: different orientations in space (and hence viewing angles to the observer), as well as the presence of a relativistic jet, result in different types of AGN. However, to understand AGN we must study all their observable features, and one of the most important is the optical emission-line spectrum. AGN spectra contain numerous iron (FeI, FeII and FeIII) lines. They appear around Hβ (on both sides) and elsewhere, and interfere with accurate line identification and measurement. Fe templates have been built to be fitted to and subtracted from a given spectrum. Fe lines are especially numerous and intense in Narrow-Line Seyfert 1 galaxies (NLS1; Osterbrock and Pogge, 1985). On the other hand, Osterbrock (1981) introduced intermediate Seyfert subtypes based on the presence and significance of broad and narrow lines. S1.0 (Broad-line Seyfert 1): These have broad permitted Balmer HI lines and narrow forbidden lines. Physically, they are the same objects as QSOs but with smaller luminosities (M_abs > −23, Osterbrock, 1980) and Hβ/[OIII]5007 > 5.0 (Winkler, 1992). S1.2: Spectra of AGN that share parameters intermediate between those of classical S1 and S2 galaxies, i.e., both broad and narrow components are present for permitted lines (in our case the Hα and Hβ lines display such profiles; Osterbrock, 1980); however, the broad lines are stronger, and 2.0 < Hβ/[OIII]5007 < 5.0 (Winkler, 1992).
S1.5: Spectra of AGN that share parameters intermediate between those of classical S1 and S2 galaxies; they have easily discernible narrow HI profiles superposed on broad wings (Osterbrock, 1980), with 0.333 < Hβ/[OIII]5007 < 2.0 (Winkler, 1992). The broad and narrow components are approximately equal in intensity. S1.8: AGN that share parameters intermediate between those of classical S1 and S2 galaxies; they have relatively weak broad Hα and Hβ components superposed on strong narrow lines, and Hβ/[OIII]5007 < 0.333 (Winkler, 1992). S1.9: Spectra of AGN that share parameters intermediate between those of classical S1 and S2 galaxies; they have a relatively weak broad Hα component superposed on a strong narrow line, while the broad component of Hβ is not seen (Osterbrock, 1980), and Hβ/[OIII]5007 < 0.333 (Winkler, 1992). S2.0: Spectra of AGN that show relatively narrow (compared to S1) emission in both permitted Balmer and forbidden lines, with almost the same FWHM, typically in the range 300–1,000 km/s. No broad component is visible. A secondary classification criterion is [OIII]5007/Hβ ≥ 3, to distinguish them from S1n (Veilleux and Osterbrock, 1987). NLS1.2 (Narrow-line Seyfert 1.2), NLS1.5 (Narrow-line Seyfert 1.5), NLS1.8 (Narrow-line Seyfert 1.8) and NLS1.9 (Narrow-line Seyfert 1.9): These are soft X-ray sources having narrow permitted lines only slightly broader than the forbidden ones, with many FeI, FeII and FeIII lines, and often strong [FeVII] and [FeX] emission lines, present. Composite (mixture of HII/LINER, HII/Seyfert or LINER/Seyfert features): composite-spectrum objects with both HII and LINER or both HII and Sy spectral features present (Véron et al., 1997). Surprisingly, QSOs have not been classified in such detail, and only a few subtypes have been introduced based on peculiar features. In Broad Absorption Line (BAL) QSOs, some emission lines show P Cygni profiles, with very broad (10,000–30,000 km/s) blueshifted absorption components (Hazard et al., 1984); BAL QSOs tend to be more polarized than non-BAL QSOs. Damped Ly-alpha (DLA) QSOs show unresolved absorption lines even in very high-resolution spectra, with typical widths of 10–12 Å. Optically Violently Variable (OVV) QSOs are similar to BL Lac objects but with a normal QSO spectrum (Kellermann et al., 1989); they have been detected in the radio. Highly Polarized Quasars (HPQ or HP) have polarization typically >3–4%; they are usually combined with OVV quasars into a single class. The parent population of HPQs consists of Fanaroff–Riley type II radio galaxies (FR II, radio sources exhibiting increasing luminosity in the lobes; Fanaroff and Riley, 1974). Detailed classification of QSOs has also been impossible because the diagnostic diagrams are based on narrow-line ratios and are useful for S2, LINER and HII objects, but not for broad-line objects.
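The Winkler (1992) flux-ratio thresholds quoted above lend themselves to a simple decision rule. The sketch below is only an illustration of those thresholds (the function name and interface are hypothetical; separating S1.8 from S1.9 additionally requires the broad Hβ visibility check, passed here as a flag):

```python
# Illustrative sketch of the Winkler (1992) flux-ratio thresholds quoted
# above; the function name and interface are hypothetical. Separating
# S1.8 from S1.9 additionally needs the broad H-beta visibility check.
def seyfert_subtype(hbeta_over_oiii: float, broad_hbeta_visible: bool = True) -> str:
    """Assign an intermediate Seyfert subtype from the H-beta/[OIII]5007 ratio."""
    if hbeta_over_oiii > 5.0:
        return "S1.0"
    if hbeta_over_oiii > 2.0:
        return "S1.2"
    if hbeta_over_oiii > 0.333:
        return "S1.5"
    # Below 0.333 the ratio alone cannot distinguish S1.8 from S1.9:
    # in S1.9 the broad H-beta component is not seen at all.
    return "S1.8" if broad_hbeta_visible else "S1.9"

print(seyfert_subtype(3.1))          # S1.2
print(seyfert_subtype(0.2, False))   # S1.9
```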
Because diagnostic diagrams cannot handle broad-line objects, QSOs have never been classified by any common scheme, and the vast majority are listed simply as "QSO" without any subclass (Massaro et al., 2009).

SLOAN DIGITAL SKY SURVEY AND POSSIBILITIES FOR ACCURATE AND HOMOGENEOUS CLASSIFICATIONS

The Sloan Digital Sky Survey (SDSS) is an unprecedented extragalactic survey providing a vast amount of photometric and spectroscopic data. SDSS Data Release 15 (DR15; Aguado et al., 2019) has been used in the present work. The photometry covers five bands: u, g, r, i and z. The medium-resolution spectroscopy (covering 3,600–10,400 Å) has provided spectra for 2,541,424 galaxies and 680,843 QSOs. No previous spectroscopic survey provides such a number of spectra for homogeneous classification. Most of the active galaxies had relatively poor spectra from older observations, so homogeneous re-classification may provide new possibilities for better studies of the same objects and/or samples with objects grouped into genuine types/subtypes. The larger spectral range has opened new possibilities for classification, particularly for galaxies at higher redshifts (whose strongest Balmer lines can still be detected). However, SDSS spectra also have some constraints, namely those related to the application of decomposition software, due to their moderate resolution and, very often, poor signal-to-noise ratio. Such software may be applied to relatively bright objects with better-quality spectra; limiting samples by magnitude (e.g., 16.5m or 17m) may significantly improve the classification. Several decomposition software packages exist, including one in IRAF. Typically, we have used the software SPECTRAI (Véron et al., 1980).

THE SAMPLE OF STUDIED OBJECTS

In our recent papers (Abrahamyan et al., 2018a; Abrahamyan et al., 2018b; Mikayelyan et al., 2018; Paronyan et al., 2018; Abrahamyan et al., 2019a; Paronyan et al., 2019) we have introduced and developed a fine classification of active galaxies to better understand the detailed spectral features distinguishing the subclasses. In this paper we use the objects provided by Abrahamyan et al. (2018b), which lists 6,301 sources extracted from the comparison of the NRAO VLA Sky Survey (NVSS) (Condon et al., 1998) and Faint Images of the Radio Sky at Twenty cm (FIRST) (Helfand et al., 2015) radio catalogues. All of them are characterized by radio variability, and 25.5% of them also show optical variability. Cross-correlating this sample with SDSS DR15, we ended up with 1,864 variable radio sources having SDSS optical spectra, and then used those with sufficiently high signal-to-noise ratio for detailed classification. SDSS provides homogeneous spectroscopic material for the standard classification of numerous objects, including our samples; we therefore give preference to SDSS spectra when available. We naturally select higher signal-to-noise spectra (brighter objects) to distinguish the detailed features of our introduced subclasses.

FINE CLASSIFICATION FOR ACTIVITY TYPES

We have carried out classification for activity types both by eye examination and using diagnostic diagrams (Abrahamyan et al., in preparation). 710 objects turned out to be QSOs, Seyferts or Composites, and these were used to refine our classification scheme with all the details of the subclasses. We used those SDSS spectra with sufficiently high signal-to-noise to distinguish the detailed features used for the classification of subclasses.
That is why the final useful number of sample objects is 710. To take into account all the details of the spectra, we first of all start the classification by identifying broad lines, which are typically neglected in diagnostic diagrams. If they exist, the object is a Type 1 AGN, or at least a composite between Types 1 and 2. This feature cannot be identified on diagnostic diagrams. It may be revealed by decomposition software (separating broad and narrow emission lines); however, very few spectra have been analyzed in this way, and the broad lines are not always obvious due to the moderate resolution of the SDSS spectra. In Figure 1 (examples of subclasses of active galaxies): S1.0 (only broad emission lines are present), S1.2 (both broad and narrow emission lines are present, overlapping each other) and S2.0 (only narrow emission lines are present). In Figure 2 (examples of subclasses of active galaxies): NLS1.2 (broad and narrow emission-line ratios typical of S1.2, but the broad lines are narrower and FeII lines on both sides of Hβ are present), NLS1.5 (the same with ratios typical of S1.5) and NLS1.8 (the same with ratios typical of S1.8). In Figure 3 (examples of subclasses of active galaxies): QSO1.2 (a QSO also having narrow lines, with broad/narrow line ratios typical of S1.2), QSO1.5 (the same with ratios typical of S1.5) and QSO1.8 (the same with ratios typical of S1.8). In Figure 4 (examples of subclasses of active galaxies): NLQSO1.0 (a QSO having only broad lines, but narrower than for classical QSOs, with FeII lines on both sides of Hβ), NLQSO1.2 (a QSO having both broad and narrow lines with ratios typical of S1.2, but with broad lines narrower than for classical QSOs and FeII lines on both sides of Hβ) and NLQSO1.5 (the same with ratios typical of S1.5). In Figure 5 (examples of subclasses of active galaxies): Composite S1.9/LINER (both S1.9 and LINER features are present; some line ratios, e.g., those involving Hα, show Seyfert characteristics, while others show LINER ones). The SDSS spectral classification of the 710 objects by activity type led to the distribution by subtypes given in Table 1.

AVERAGE CHARACTERISTICS OF OBJECTS BY SUBTYPES

Having our fine classification, it is possible to investigate how the physical parameters of the objects depend on their subclass. In Figure 6 we give SDSS color–magnitude diagrams: u-g vs. r(abs) and g-r vs. r(abs). There is a distinct separation of Type 1 and Type 2 AGN, and one can notice a gradual transition from brighter AGN to fainter ones (a more detailed division by subtype is still impossible due to uncertainties and measurement errors). In Figure 7 we give the SDSS color–color (u-g vs. g-r) diagram by activity subtype. Again, QSOs are the bluer objects, followed by Seyfert 1s, while Seyfert 2s typically have redder colors. More details on the behavior of the newly defined subclasses are not yet possible. The distribution of luminosity vs. redshift by activity subtype is given in Figure 8.
Such diagrams may also verify the classifications, as on average we should expect an increase in luminosity from S2.0 to brighter Seyferts and then to QSOs (QSO1.9–QSO1.8–QSO1.5–QSO1.2–QSO1.0). In Table 2 we give the average characteristics of the classified objects by activity type, combined into larger groups given that in some cases the number of objects is too small for reliable statistics. On average, we see a reddening of the colors (in both SDSS u-g and g-r) from QSO1.0–1.2 to QSO1.8–1.9, and likewise from S1.0–1.2 to S1.8–1.9. However, narrow-line objects (both QSOs and Seyferts) are as blue as the broad-line ones, and in the case of Seyferts (NLS1.0, NLS1.2 and NLS1.5) they correspond to S1.0–1.2–1.5 colors (probably because most of them have such subtypes). Objects classified simply as "QSO" will probably be refined in the future and assigned to one of the subtypes, as at present they are a mixture of all subtypes. As expected, redshifts are higher for QSOs (z ≈ 1.2–1.3) and lower for Seyferts (z ≈ 0.4–0.5). In addition, narrow-line QSOs have lower redshifts, which comes from the impossibility of their fine classification at higher redshifts (z > 1); this is not the case for Seyferts. The absolute magnitudes and luminosities follow the rule that Type 1 AGN are brighter and Type 2 ones relatively fainter. The differences in luminosity come from the fact that more energy escapes in the optical range from objects of types and subtypes with a smaller angle between their jets and the line of sight. The same is true for the colors: the nuclei are blue, and hence higher-luminosity AGN have bluer colors compared to lower-luminosity ones. The absolute magnitudes were computed using the luminosity distance D (Eq. 2) as defined by Riess et al. (2004), where z is the redshift, f(z) = −2.5 × log(1 + z)^(1−α) is the K-correction, and Δm(z) is a correction to f(z) accounting for the shape of the spectrum. Having the absolute magnitude, we calculated luminosities using Eq. 3 (Abrahamyan et al., 2019b).

SUMMARY AND CONCLUSION

Using SDSS spectroscopy, we have carried out optical spectral classification for activity types for 710 candidate AGN, using the sample of objects selected as radio-variable sources from the analysis of the NVSS and FIRST catalogues. We have published a number of papers with more detailed spectral classifications introducing new subtypes. Many QSOs have been identified and classified in more detail. The fine classification shows that many QSOs display the same features as Seyferts, i.e., subtypes between S1 and S2 (S1.2, S1.5, S1.8 and S1.9). We have introduced corresponding subtypes for the QSOs: QSO1.2, QSO1.5, QSO1.8 and QSO1.9, though the last subtype does not appear in the SDSS wavelength range because Hα (the main line for identifying the 1.9 subtype) is mostly redshifted out of it. Thus, independent of luminosity (which serves as the separator between QSOs and Seyferts), AGN subtypes show the same fine features (the presence and different ratios of broad and narrow emission lines). The QSO subtypes, together with the Seyfert ones, allow us to follow AGN properties over a larger redshift range, expanding our knowledge of AGN evolution to the more distant Universe represented by QSOs. The fine classification of AGN activity types may strongly contribute to the analysis of the observed and physical characteristics of objects, allowing their changes and relations to be followed by activity type and subtype. Of course, large numbers are important and useful for better statistics.
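As an aside, the magnitude-to-luminosity computation quoted in the previous section can be sketched numerically. Since Eqs. 1–3 themselves are not reproduced in this text, the sketch below combines the standard distance-modulus relation with the quoted K-correction f(z) = −2.5 log(1 + z)^(1−α); the spectral index α, the luminosity distance and the magnitude zero point are illustrative assumptions, not values from the paper:

```python
# Hedged numeric sketch: Eqs. 1-3 are not reproduced in this text, so this
# combines the standard distance modulus with the quoted K-correction
# f(z) = -2.5 log(1+z)^(1-alpha). The spectral index alpha, the luminosity
# distance and the magnitude zero point below are illustrative assumptions.
import math

def absolute_magnitude(m_app: float, d_lum_pc: float, z: float,
                       alpha: float = 0.5) -> float:
    f_z = -2.5 * math.log10((1.0 + z) ** (1.0 - alpha))  # K-correction
    return m_app - 5.0 * math.log10(d_lum_pc / 10.0) + f_z

def luminosity_solar(m_abs: float, m_sun: float = 4.83) -> float:
    """Luminosity in solar units from the absolute magnitude."""
    return 10.0 ** (0.4 * (m_sun - m_abs))

m_abs = absolute_magnitude(m_app=17.0, d_lum_pc=3.0e9, z=0.5)
print(f"M_abs = {m_abs:.2f}, L = {luminosity_solar(m_abs):.2e} L_sun")
```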
To exploit such large samples, an automated classification system and software are necessary that take into account possible broad lines, carry out decomposition (separating broad lines from narrow ones and better measuring their intensities and widths), and then measure the narrow lines to place on diagnostic diagrams. In this way, narrow-line objects will be classified correctly, while objects that also have broad components can be assigned to the Seyfert and QSO subtypes 1.0–1.9 or contribute to Composite spectrum classes. If only diagnostic diagrams are used, the result will be incomplete, as only narrow lines are taken into account. Decomposition methods distinguish the broad and narrow line profiles, but diagnostic diagrams separating the narrow-line ratios are still necessary for further classification. Our conclusion is that both kinds of software (decomposition and calculation of line ratios), or a combination of the two methods, should be used for the full classification, and our scheme should be incorporated into these methods to provide the best possible description of all the features of the various subtypes of the AGN zoo.

DATA AVAILABILITY STATEMENT

The datasets generated for this study are available on request to the corresponding author.
Intelligent Automatic 3D Printing Machine Based on Wireless Network Communication

With the advent of the era of big data, people have higher requirements for material and spiritual gains and happiness, and their demands for various services in the communications field are diversified. One of the most significant changes is 3D printing technology: with its unique advantages, 3D printing is gradually sweeping the world. In order to explore whether technology or digital models based on wireless network communication can realize the automation and intelligence of 3D printing, this article applies a variety of scientific methods, such as simulation experiments, data collection and sample analysis; collects samples; and simplifies the algorithm, using basic experimental methods, orthogonal experiments and single-factor experiments to study the various influencing factors in the automatic printing process and obtain the optimal parameter combination. Experimental results show that, with the support of wireless communication network hardware and data, the timeliness and quality of 3D printing can be significantly improved. These are the basis for realizing 3D printing automation and intelligence. Further experiments show that intelligent automatic 3D printing machinery on a wireless communication network adjusts the 3D printing parameters through wireless communication technology combined with intelligent real-time monitoring and sensing equipment; the printing efficiency of the 3D printing machinery is increased by about 15%, and the economic cost is reduced by about 20%, which demonstrates the practicality of the experimental research results.

Introduction

With the development of the times and of science and technology, mobile communication has penetrated daily life [1]. People expect more and more services to be provided, and at the same time they are increasingly demanding about service quality [2]. Traditional GSM and CDMA technologies cannot meet people's demand for high-speed, instant communication. It can be predicted that the transmission rate of future communication models will increase by one to two orders of magnitude, and the development trend of wireless networks requires high rates and higher link reliability [3]. In a wireless network, all channels are shared, and transmitted signals may be received by nearby users or affect the transmission of others. This also indicates that the wireless communication industry and wireless communication technology have entered an important period of rapid development, with more and more intelligent terminal devices continuously accessing wireless networks [4]. 3D printing technology is an emerging additive manufacturing technology that is rapidly developing and expanding in the manufacturing industry, and is hailed as a "manufacturing technology with industrial-revolution significance." With its unique advantages, 3D printing is gradually sweeping the world. 3D printing benefits from the integration of cutting-edge technologies from multiple disciplines. Mixing and separating liquids through microchannels, however, is often still a slowly spreading technology, because device fabrication requires advanced equipment and the use of the technology requires expert operators. Au et al. found that integrating microfluidic automation in a device involves specialized multilayer fabrication and bonding methods.
They believe that stereolithography is an assembly-free 3D printing technology that is becoming an effective alternative for rapid prototyping of biomedical devices. They describe fluid valves and pumps that can be stereolithographically printed in optically transparent, biocompatible plastic and integrated into microfluidic devices at low cost. User-friendly fluid-automation devices can be printed and used by nonengineers, replacing expensive robotic pipettors or tedious manual delivery. They combine these designs as digital modules into new devices with extended functions. Printing these devices requires only digital files and electronic access to the printer [8]. How to adjust 3D printing machinery in real time and accurately with the help of wireless network communication technology, use real-time sensors to monitor and scan the printing, adjust the printing parameters in real time, and find a new direction for the further development of 3D printing technology remains to be elucidated. The purpose of this article is to study the use of 3D printing technology to build a technology or digital model based on wireless network communication, improving the efficiency of mechanical manufacturing and achieving automation and intelligence. By constructing an experimental model, the error during 3D printing is analyzed and corrected, and the process parameters during printing are analyzed. Adjusting basic parameters such as layer thickness, printing speed and temperature through the wireless network, basic experimental methods, orthogonal experiments and single-factor experiments are used to study each influencing factor during automatic printing and obtain the optimal parameter combination. Experiments show that, with the support of the hardware and data of the wireless network, the timeliness and quality of 3D printing can be significantly improved. These are the basis for realizing 3D printing automation and intelligence.

Wireless Network Communication Technology Framework

In the era of data, people's pursuit of a better life continues to increase, and the demand for ultra-high-speed, wide-area, full-coverage and low-latency information exchange in wireless communication networks increases correspondingly. A large number of wireless communication devices and smart industries are connected to wireless networks. The number of wireless communication devices continues to soar and the scale of communication networks continues to expand, resulting in a rapid increase in the energy consumption of communication networks, a shortage of spectrum resources, and a sharp increase in air pollutants generated by that energy consumption. How to make full use of existing energy, better solve the spectrum crisis, reduce the carbon footprint and air-pollutant emissions, and provide users with higher network capacity and a better experienced rate has become the focus of much scientific research. Four commonly used communication models are described below.

(1) Cellular communication model. In the downlink communication scenario, the wireless communication network is divided into several cells according to the number of base stations.
Let the total number of base stations in the wireless communication model be n; all intelligent terminals and mobile devices in the model use a single fully open wireless antenna for sending and receiving information. The model area is divided into regular polygonal cells of the same size and area, with each base station located at the center of its cell, as shown in Figure 1. In the downlink communication link of the cellular network, OFDMA technology is used to divide the communication bandwidth into subcarriers of equal bandwidth and allocate them to users accessing the network. Assuming that the u-th user in a given area communicates with base station k in that area over subcarrier t, the channel gain can be expressed as

g_{u,t,k} = −g − m·log10(w_{u,k}) + ϑ_{u,t,k} + 10·log10(l_{u,t,k})  (1)

According to Shannon's theorem, the transmission rate e_{u,k} between user u and base station k can then be obtained as

e_{u,k} = β_k · Σ_t a_{u,t,k} · (y_{yt}/o_{tvy}) · log2(1 + γ_{u,t,k})  (4)

In formula (4), β_k represents the switch state of the kth base station, y_{yt} represents the total bandwidth of the communication model, o_{tvy} represents the total number of communication-link subcarriers, a_{u,t,k} indicates whether user u and base station k communicate over subcarrier t, and γ_{u,t,k} denotes the signal-to-interference-plus-noise ratio of the link. In the cellular downlink, the interference affecting OFDM information transmission is mainly inter-cell interference, caused by different cells using the same subcarrier. The interference is expressed in formula (6): when a_{g,t,o} = 0.5, it represents internal interference of the cellular network, caused by base station o and user g communicating over subcarrier t at the same time. In a wireless communication network, a given subcarrier will not cause communication interference if it is allocated to a single user at a time, but it will cause severe interference if it is allocated to multiple users or used by D2D devices simultaneously. Equation (7) therefore ensures that each subcarrier is allocated only once at a time, avoiding co-channel interference within the cell.

(2) Base station energy consumption model. In wireless communication networks, operators have long relied on traditional power grids for energy. Because traditional power grids generate carbon dioxide and other greenhouse-gas emissions, aggravating the global greenhouse effect and producing environmental pollutants, and because clean energy purchased from smart grids is expensive, energy harvesting technology has been widely studied, as shown in Figure 2. Energy harvesting uses collection devices to gather clean energy such as solar, wind and electromagnetic energy from the surrounding environment or the natural world and convert it into electrical energy for consumption and use by the communication model.
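Stepping back to the downlink rate model of formula (4), the computation can be illustrated with a minimal sketch (an assumption-laden illustration, not the paper's implementation: the even bandwidth split and the per-subcarrier SINR values below are made up):

```python
# Minimal sketch (assumptions, not the paper's exact formulas) of the
# OFDMA downlink rate model described above: the total bandwidth is split
# evenly over subcarriers and Shannon's theorem gives the per-link rate.
import math

def downlink_rate(base_on: bool, total_bw_hz: float, n_subcarriers: int,
                  allocated: list[int], sinr: list[float]) -> float:
    """Rate of one user: sum over its allocated subcarriers t of
    (W / N) * log2(1 + SINR_t); zero if the base station is switched off."""
    if not base_on:
        return 0.0
    sub_bw = total_bw_hz / n_subcarriers
    return sum(sub_bw * math.log2(1.0 + sinr[t]) for t in allocated)

# Illustrative numbers: 20 MHz split into 100 subcarriers, 3 allocated
rate = downlink_rate(True, 20e6, 100, allocated=[0, 1, 2], sinr=[10.0] * 100)
print(f"{rate / 1e6:.2f} Mbit/s")
```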
Since a base station consumes a lot of energy when working, and each base station has a different working state in practice, the energies consumed differ, as shown in Equations (8) and (9), where the coefficient μ reflects the power-amplification factor of the kth base-station transmitter.

(3) D2D communication model. In a wireless communication network, when D2D users need to establish a communication connection, they first use spectrum-sensing technology to sense the surrounding environment or the spectrum holes of the cell, find the cellular subcarriers most favorable for D2D communication, establish the connection between the D2D users, and release the subcarrier resources only when the D2D communication is completed, making full use of the frequency-band resources of the cell and of dormant cells, as shown in Figure 3. The path loss between D2D users is given in formula (10), where w_{m,f} denotes the distance between the m-th and f-th D2D users; because D2D distances are short, transmission is mostly line-of-sight and the communication rate is extremely fast. According to Shannon's theorem, the D2D transmission rate can then be obtained, where q_w represents the transmit power of the D2D transmitter, g_{m,t,f} the path loss between user m and user f on subcarrier t, and o_{w0} the noise power affecting the D2D link.

(4) Energy collaboration model. In a communication network, there are two sources for the energy consumed by a base station. The first is mutual energy transfer within the wireless communication network through energy cooperation among a large number of base stations; in this case, the base station does not need to obtain energy from the outside world. The second situation is that there is no energy cooperation in the network, or only a very small number of base stations participate; when the energy collected by the base station is not enough to meet its own needs, energy must be obtained from the traditional power grid, as shown in Figure 4. In the energy-cooperation mode, a new type of energy harvester is installed in each base station, giving it the ability to collect energy from the natural world. MIMO technology mainly includes spatial multiplexing and spatial diversity. Spatial diversity uses space–time coding, dividing data into multiple substreams transmitted simultaneously on multiple antennas; the diversity gain is obtained by introducing coding redundancy in the time domain across the transmitting antennas. Spatial multiplexing sends independent information streams on the transmitting antennas, and the receiving end uses interference-suppression methods to decode them, maximizing the rate. Generally speaking, spatial multiplexing can be used to increase the throughput of wireless communication systems, while spatial diversity can be used to expand their coverage.
In a communication network, if the energy consumed by the kth base station to transmit information per unit time is m_k (in joules, J), then m_k can be expressed as in formula (13), where Δf_k represents the energy obtained by the kth working base station from the energy collector of the wireless communication network, that is, the energy exchanged between base-station models.

3D Printing Technology and Materials

(1) Features of 3D printing technology. 3D printing is actually similar to ordinary printing, but it adds an extra dimension: printed items go from two dimensions to three. The raw material is no longer ink but plastic, metal and other everyday building materials used to construct objects. It can be said that the process of 3D printing is the process of manufacturing an object, and the manufacturing method is layer-by-layer printing [9-11]. Drawings are designed before printing, so that once printing starts the 3D printer can produce the desired object step by step according to the drawings. The technology used in the printing process is called 3D stereo printing [10].

(2) 3D printed material structure. Many materials can be 3D printed, all of them building materials from daily life, but depending on the nature and use of the material, the printing techniques differ, and the shapes of objects printed by different technologies also differ. Even so, only basic units are printed, and all 3D-printed objects are made of these basic units [12-14]. According to material, the printing form can be divided into linear, granular, layered, polymer, powder and surface. 3D printing also comprises many technologies, distinguished by the materials they can use; most of them create parts layer by layer. Among the materials commonly used in 3D printing are gypsum, stainless steel, silver- and gold-plated materials, nylon and rubber-based materials. According to material, the methods are divided into extrusion, wire, granular, powder-layer-nozzle, lamination and photopolymerization types. Among the extrusion technologies is fused deposition modeling (FDM), whose basic materials are thermoplastics, eutectic-model metals and edible materials. In electron beam freeform fabrication (EBF), the basic material is any alloy; in electron beam melting (EBM), titanium alloy; in selective laser melting (SLM), titanium alloy, cobalt–chromium alloy, stainless steel and aluminum; in selective heat sintering (SHS), thermoplastic powder; in selective laser sintering (SLS), thermoplastic, metal powder and ceramic powder; in gypsum 3D printing (PP), gypsum; in laminated object manufacturing (LOM), paper, metal film and plastic film; in stereolithography (SLA), light-hardening resin; and in digital light processing (DLP), light-hardening resin [15-17].

(3) 3D printing basic model. 3D printing technology refers to manufacturing three-dimensional products by adding material layer by layer through printing equipment, based on an established three-dimensional model.
This layer-by-layer stacking and forming technology is called additive manufacturing. Compared with traditional manufacturing technology, 3D printing does not need molds manufactured in advance, does not need to remove large amounts of material during manufacturing, and does not need a complex forging process to obtain the final product [18-20]. Therefore, through 3D printing technology, structural optimization, material saving and energy saving can be achieved in production. 3D printing is suitable for rapid prototyping and small-batch product manufacturing, the manufacture of complex shapes, and mold design and manufacturing. It is also suitable for the manufacture of difficult-to-machine materials, shape-design inspection, assembly inspection and rapid response. The 3D printing industry has therefore received increasingly extensive attention at home and abroad and is bound to become a rising industry with broad development prospects [21,22]. The machinery manufacturing industry has achieved rapid development. However, with the continuous development and progress of science and technology, machinery manufacturing and automation have gradually begun to develop toward informatization and intelligence and no longer need purely manual labor. Nowadays, most mechanical manufacturing links rely more on intellectual resources, so some simple physical labor will inevitably face unemployment. For a long period of time, China has been short of professional technical personnel, which has hindered the smooth development of the machinery manufacturing industry. Mechanical manufacturing and automation based on 3D printing technology effectively eliminates process links, saving most traditional procedures, and uses digital production technology as the basis for mechanical manufacturing, reducing the dependence on assembly lines in the manufacturing process. The use of 3D printing in the machinery manufacturing industry has greatly reduced the demand for manual labor, and enterprises often prefer more mechanized and automated production processes, which reduce labor costs. In addition, with the continuous improvement and widespread use of representative digital technologies such as 3D printing, the problem of idle labor resources will become increasingly obvious. Therefore, how to organically redeploy the labor resources idled by the traditional machinery manufacturing industry is a key issue affecting the future development of the industry [23-25].

3D Stereo Printing Processing and Process

(1) 3D printing strategy. The accuracy of the 3D printing algorithm affects the final result of 3D printing; it mainly concerns the preprocessing of 3D model files. The final printing effect depends heavily on the accuracy of the 3D model itself, so height information is the most important data when the model is restored. Generally speaking, 3D modeling restores the original object based on height information, so the height information should be valued and accurately restored during modeling; its accuracy guarantees a faithful restoration of the objects. The second point is the accuracy of contour restoration.
Generally speaking, we use slice processing and the triangular facets of the STL file to approximate the restored model. Slice processing is the initial step: it quickly maps the model into planar space for rapid conversion in subsequent operations. Next, the STL file uses a large number of triangles to approximate and accurately restore the original model. Our main requirement is the intersection points, that is, the intersections of the tangent plane with the triangle edges of the STL file, which are connected into a contour, similar to flat printing; both need rasterization and gridding. Since 3D printing is more complex than flat printing but less mature in development and printing, some methods from flat printing can be borrowed, improving the printing effect to a certain extent.

(2) Characteristics of the 3D stereo printing process. The process of 3D stereo printing can be divided into the following steps: (1) the 3D-modeling preprocessing phase, (2) the data preprocessing stage, and (3) preprocessing and rasterization in 3D printing. The specific steps are shown in Figure 5.

Algorithm in 3D Stereo Print Processing

(1) Collection of 3D height information. The common method of collecting 3D height information is to use camera projection to collect the fringe information of parallel light on the surface of the object, where AB represents a plane, C is the projection light source, and CE, CF and CH are rays projected from C. The points where the projected rays intersect the imagined plane are E, F and H, respectively. Point G is the intersection of the 3D surface with the projected ray CF. The distance MF is L, and from the similarity of the triangles the height can be calculated. According to this formula, the height of the point corresponding to each parallel-light projection can be computed, and the 3D model restored.

(2) Reconstruction of half-edge topology for hash storage. A hash table is used to store the information of the unordered triangular facets stored in the STL file, and to collect, organize and construct the half-edge structure according to their set relationships, improving the reading efficiency of the 3D model and yielding the contour information of the slices. Topology reconstruction can be divided into three steps. First, get the data: read all the data in the STL file in turn to obtain the storage structure of the triangular-facet information of the 3D digital model, such as the three vertex coordinates of each triangular facet and its normal vector. Second, according to the topological relations of the half-edge structure, find the half-edge at any point and construct the triangular surface. The spatial relationship between the tangent plane and a pair of adjacent triangular facets generally falls into three cases: the common edge of the two adjacent facets intersects the tangent plane; the common edge lies in the tangent plane; or the tangent plane does not intersect the common edge, in which case the triangle contributes no intersection point and none is counted.
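The edge–plane intersection test just enumerated can be illustrated with a short sketch (illustrative only, not the paper's implementation; the names and types are hypothetical): for a horizontal tangent plane z = z0, each triangle edge whose endpoints lie on opposite sides of the plane contributes one contour point by linear interpolation.

```python
# A minimal sketch (illustrative, not the paper's implementation) of the
# slicing step described above: find where the edges of an STL triangle
# cross a horizontal tangent plane z = z0, yielding contour points.
from typing import List, Tuple

Point = Tuple[float, float, float]

def slice_triangle(tri: Tuple[Point, Point, Point], z0: float) -> List[Point]:
    """Return intersection points of the plane z = z0 with the triangle's edges."""
    hits = []
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        # Edge crosses the plane iff its endpoints lie on opposite sides.
        if (a[2] - z0) * (b[2] - z0) < 0:
            t = (z0 - a[2]) / (b[2] - a[2])  # linear interpolation parameter
            hits.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]), z0))
    return hits  # 0 or 2 points for a triangle in general position

tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0))
print(slice_triangle(tri, 0.5))
```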
Third, use the hash table's storage mode to store the triangular half-edge information: the hash table serves as an additional data structure whose nodes store the fixed 3D vertex coordinates. Because the triangular facets in STL files are stored in no particular order, establishing an ordered data structure would otherwise require multiple sorting passes (e.g., bubble sort), with the hash-table length L chosen in relation to the number N of triangular facets used. The number of sorting passes would affect the efficiency of 3D printing, whereas hash storage makes the processing easier and reduces complexity, ensuring efficiency.

(3) The hash function is defined as ε_max = αX_max + βY_max + γZ_max, with α = 3, β = 5 and γ = 7, where X_max, Y_max and Z_max are obtained from the vertex coordinates x, y and z, respectively, and K is taken as 10.

In order to obtain the optimal printing parameters under wireless network communication, we adopt the single-factor experiment method. We set the time taken for 3D printing and the size error of the printed result as the final indices of the experiment. The selected experimental parameters are layer thickness, printing speed and nozzle temperature. The basic ranges of these parameters are obtained under the IoT, and the values are then averaged as shown in Table 1.

3D Printed Orthogonal Experiment. The flow chart of the experiment is shown in Figure 6. This experiment is based on the averaged data of the single-factor 3D printing experiments. On the basis of the wireless network communication, it examines the effect of the factors on the selected indicators without taking the interaction between factors into account. The data obtained from the experiment are calculated by an interactive algorithm. We consider the effect of layer thickness, print speed and nozzle temperature on the final experimental data, use the wireless-network-communication algorithm to optimize the parameters, and establish the model to confirm the feasibility of 3D printing automation. The first step is to establish a three-dimensional model: build a 3D model of the printed sample with computer modeling software, collect three-dimensional data of the target object with a 3D laser scanner, and construct, edit and modify it to generate a three-dimensional digital model in a universal output format. The generated model data are then processed, and the printing materials are added to the printer for layer-by-layer printing.

Analysis of Experimental Parameters. (1) On the basis of the wireless network communication, we carried out a single-factor experiment; after data scanning, data sensing and the other basic 3D printing processes combined with the wireless network communication, we obtained several sets of print parameters that had been intelligently and automatically adjusted. According to the results, the IoT intelligent automatic 3D printing model is able to adjust parameters automatically, and the obtained parameters are well optimized for the 3D printing process. In the three groups of experiments, there were obvious fracture marks when the layer thickness was 0.05 mm or 0.1 mm, while printing at a thickness of 0.2 mm showed no abnormal flow.
When automatically adjusted over the wireless network, the selected print thickness is 0.3–0.5 mm. In the second group, with the layer thickness controlled at 0.3 mm and the temperature at 200°C, the printing speed of the experimental IoT 3D printing machinery was varied from 5 mm/s to 30 mm/s; faults appeared at 5 mm/s and 30 mm/s, while the other speeds printed normally. In the third group, the layer thickness was controlled at 0.3 mm and the printing speed at 15 mm/s, and the IoT intelligent model adjusted the temperature through 180/190/200/210/220/230°C; printing was abnormal at 230°C and normal at the other temperatures. The experimental parameters selected by the IoT intelligent 3D printing model are shown in Table 2 and Figure 7. (2) We took the data from the single-factor experiments, averaged them, and then carried out the orthogonal experiment on these parameters. The influence level of the three main factors was obtained from the single-factor experiments, and several optimal results were obtained [24]: layer thickness 0.2 mm with printing speed 10 mm/s and temperature 190°C; layer thickness 0.3 mm with printing speed 15 mm/s and temperature 200°C; layer thickness 0.4 mm with printing speed 20 mm/s and temperature 210°C; and layer thickness 0.5 mm with printing speed 25 mm/s and temperature 220°C. This eventually yields several parameter combinations that can be set automatically. With this process, the time spent selecting printing parameters in 3D printing can be greatly reduced, laying the foundation for intelligent printing, as shown in Table 3 and Figure 8.

Optimal Parametric Data Analysis. (1) After the single-factor 3D printing experiments, we integrated the data and carried out the interaction experiment. Through the IoT structure, we obtained the optimal solution from the single-factor and orthogonal experimental studies of 3D printing with PLA materials. After comprehensive analysis of the experimental data, we can draw the following conclusions: the degree of influence of each factor on printing efficiency, from largest to smallest, is layer thickness, printing speed and nozzle temperature. As the layer thickness increases, the printing time shortens, but the print size error first decreases and then increases, with the minimum size error at a layer thickness of 0.3 mm. As the printing speed increases, the printing time shortens, but the print size error again first decreases and then increases, with the minimum at 15 mm/s; since the size errors at 15 mm/s and 20 mm/s differ little, the shorter printing time, i.e., 20 mm/s, is chosen. As the temperature increases, the printing time changes little, while the print size error first decreases and then increases, with the minimum at 200°C. Based on the above experimental studies, the optimum parameters were obtained as layer thickness 0.3 mm, printing speed 20 mm/s and temperature 200°C.
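The level-averaging logic behind these conclusions can be sketched as follows (the run data are made up for illustration; this is not the paper's dataset or code): for each factor, average the size error over the runs at each level and pick the level with the smallest mean error.

```python
# Illustrative sketch (made-up numbers) of the level-averaging analysis
# summarized above: for each factor, average the size error over runs at
# each level and pick the level with the smallest mean error.
from collections import defaultdict

# (layer_thickness_mm, speed_mm_s, temp_C) -> size error; hypothetical runs
runs = [
    ((0.3, 15, 200), 0.08), ((0.3, 20, 210), 0.09),
    ((0.4, 15, 210), 0.12), ((0.4, 20, 200), 0.11),
]

def best_level(factor_idx: int) -> float:
    """Return the factor level whose runs have the lowest mean size error."""
    by_level = defaultdict(list)
    for params, err in runs:
        by_level[params[factor_idx]].append(err)
    return min(by_level, key=lambda lvl: sum(by_level[lvl]) / len(by_level[lvl]))

print({"thickness": best_level(0), "speed": best_level(1), "temp": best_level(2)})
```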
The results are shown in Table 4 and Figure 9. (2) In this experiment, the most influential factor on printing time and printing quality is therefore layer thickness, followed by printing speed, with temperature the least influential. Taking each factor level as the horizontal coordinate and the average printing time and average size error as the vertical coordinates, trend maps of each factor against printing time and average size error can be drawn. For the layer thickness factor, the trend in Figure 6(a) shows that when the layer thickness varies between 0.3 and 0.5 mm the size error becomes obviously higher; layer thickness is the main factor affecting print quality and should be weighted accordingly in determining the optimal level, so a layer thickness of 0.3 mm is selected. For the print speed factor, Figure 6(b) shows that print quality at 15 mm/s and 20 mm/s differs little; to shorten the printing time, the higher speed is preferred, so 20 mm/s is selected. For the temperature factor, Figure 6(c) shows that temperature has very little influence on printing time but an obvious influence on print size error, which is minimal at 200°C, so 200°C is selected. The specific data and trends are shown in Table 5 and Figure 10.

Conclusions

The era of big data promotes the continuous development of wireless communication. User experience rate, traffic density, end-to-end delay, peak communication rate, connection density and user mobility have become key technical indicators of the new generation of wireless communication. To improve network capacity and user experience rate, wireless communication networks need to deploy higher-density cell base stations to provide information transfer and intelligent control for user communication. On this basis, large-scale smart devices and mobile terminals can access the network at high speed anytime and anywhere. Moreover, an intelligent 3D printer on a wireless communication network can solve the problem of automatic adjustment of printing parameters and reduce the damage rate, errors and loss of materials. Compared with traditional 3D printing, it does not increase cost with the complexity of the manufactured items, can achieve single-piece manufacturing, reduces waste by-products, effectively shortens the manufacturing cost and time of complex molds, and promotes the development of traditional manufacturing toward intelligence and digitalization; smart 3D printing technology based on wireless network communication can better meet the needs of contemporary technological development. Given the high accuracy and hardware requirements of 3D printing, without a scientific and effective intelligent printing scheme it is difficult to drive the development of this technology. Wireless network communication technology can help greatly here: using RFID technology, it can identify the print target, and with intelligent sensing it can adjust the print status and parameters continuously, quickly and in real time.
In addition, wireless network communication transmission is reliable and stable; with Internet and wireless network support, all required information is transmitted to the print terminal and the front end in real time, accurately and securely. Most importantly, wireless network communication can process the acquired and scanned data intelligently, enabling intelligent control and detection and genuinely helping to achieve intelligent automation. Contemporary science and technology are booming, 3D printing and wireless network communication architectures have emerged one after another, and requirements for quality and efficiency keep rising. Faced with this situation, various technologies are constantly being adjusted to keep pace. The purpose of this article is to study the use of 3D printing technology to build a digital model based on wireless network communication technology, so as to improve the efficiency of mechanical manufacturing and achieve automation and intelligence. By constructing an experimental model, the size error during 3D printing is analysed and corrected, and the process parameters during printing are analysed. With the wireless network communication adjusting basic parameters such as layer thickness, printing speed, and temperature, single-factor and orthogonal experiments are used to study each influencing factor during automatic printing and obtain the optimal parameter combination. The experiments show that, with the support of wireless network communication hardware and data, the timeliness and quality of 3D printing can be significantly improved; this is the basis for realizing 3D printing automation and intelligence. The experimental data show that intelligent automated 3D printing machinery adjusts the printing parameters through wireless network communication technology; combined with intelligent real-time monitoring and sensing equipment, the printing efficiency is increased by about 15% and the economic cost is reduced by about 20%. These results provide guidance for the development of intelligent automated 3D printing machinery based on wireless network communication. Data Availability No data were used to support this study. Conflicts of Interest The authors declare that they have no conflicts of interest.
7,968.8
2021-12-16T00:00:00.000
[ "Engineering", "Computer Science" ]
Machine learning nominates the inositol pathway and novel genes in Parkinson’s disease Abstract There are 78 loci associated with Parkinson’s disease in the most recent genome-wide association study (GWAS), yet the specific genes driving these associations are mostly unknown. Herein, we aimed to nominate the top candidate gene from each Parkinson’s disease locus and identify variants and pathways potentially involved in Parkinson’s disease. We trained a machine learning model to predict Parkinson’s disease-associated genes from GWAS loci using genomic, transcriptomic and epigenomic data from brain tissues and dopaminergic neurons. We nominated candidate genes in each locus and identified novel pathways potentially involved in Parkinson’s disease, such as the inositol phosphate biosynthetic pathway (INPP5F, IP6K2, ITPKB and PPIP5K2). Specific common coding variants in SPNS1 and MLX may be involved in Parkinson’s disease, and burden tests of rare variants further support that CNIP3, LSM7, NUCKS1 and the polyol/inositol phosphate biosynthetic pathway are associated with the disease. Functional studies are needed to further analyse the involvements of these genes and pathways in Parkinson’s disease. Introduction Genome-wide association studies (GWAS) have nominated many variants associated with complex traits.In Parkinson's disease (PD), the most recent GWAS revealed 90 independent risk variants across 78 genomic loci. 1 Although many single-nucleotide polymorphisms (SNPs) are in novel genomic loci, well-established PD genes discovered many years ago, such as LRRK2, PINK1, DJ-1, SNCA, GBA1, PRKN and MAPT still account for the vast majority of research on this disease. Several disadvantages of GWAS limit additional functional analyses.First, over 90% of all GWAS significant SNPs are in non-coding regions. 2 These SNPs are often passenger variants due to complex linkage disequilibrium (LD).Second, the causal gene associated with the causal SNPs remains unclear in most GWAS loci. 3 To overcome these challenges, downstream GWAS analyses were established with the aim of identifying causal genes within GWAS loci.5][6] These models use LD structure, and gene expression panels to discover causal SNPs/genes.While these methods may propose causal variants and genes, additional biological evidence is generally required to pair causal variants with causal genes.Using multi-omic analyses, one can integrate a diverse range of comprehensive cellular and biological datasets such as genomic, transcriptomic and epigenetic datasets and use platforms such as Open Targets Genetics (https://genetics.opentargets.org/)to perform systematic analyses of gene prioritization across all publicly available GWASs. 7lthough powerful, Open Targets Genetics lacks disease-specific tissues relevant to PD such as dopaminergic neurons and microglia.Using a similar approach, we may discover additional pathways and genetic targets involved in PD. In this study, we leveraged PD-relevant transcriptomic, epigenomic and other datasets in our gradient boosting model (Fig. 1).We trained this model on well-established PD genes to nominate causal genes from PD GWAS loci. General design of the study Our objective was to nominate the most probable genes to be involved in PD from each GWAS locus based on the most recent PD GWAS (see Fig. 1 for the study protocol). 
1To do so, we first defined all the genes and SNPs that are within these loci (see later) and used to a machine learning approach to nominate the top genes in each locus.Based on the previous literature and consensus between authors, we identified seven genes from well-established loci associated with PD that can be considered the likeliest driving genes of their respective loci (GBA1, LRRK2, SNCA, GCH1, MAPT, TMEM175 and VPS13C).We then acquired data for multiple features, including different distance measures from top SNPs, different QTLs, expression in relevant tissues and cell types and predictions of variant consequences (78 features out of 284 were used after removal of redundant features; Supplementary Table 1).Using the seven well-established PD genes, which were labelled as positive, and 212 genes in the same loci that received negative labels (i.e.not likely to drive the association with PD, since the PD-driving gene is already well-established), we trained a machine learning model.This model enabled us to generate a prediction score for each gene within each locus, assessing their potential involvement in PD.The gene with the highest score in each locus is the nominated gene to be associated with PD.We then performed multiple post hoc analyses to further validate and explore our results: burden tests for rare variants in the top-scoring genes, pathway enrichment and pathway PRS analyses, differential expression analyses and structural analyses for candidate coding variants. Definition of loci and genes within each locus Following the definition by Nalls et al., 1 all loci were defined based on the 90 independent risk variants (Supplementary Table 2).Variants within 250 kb were merged into a single locus, which led to 78 loci.All protein coding genes within 1 Mb of the risk variants were included in the model.To exclude non-causal variants, echolocatoR was used as a comprehensive fine-mapping model. 5his method leverages Bayesian statistical and functional finemapping tools as well as epigenomic data to calculate the causal probability of SNPs in a locus. 5In our downstream analysis, we incorporated the SNPs nominated by echolocatoR into the credible gene sets generated by the same tool.Furthermore, we included the 90 independent SNPs obtained from the PD GWAS in our analysis. Feature preprocessing To leverage multi-omic data for the machine learning algorithm, we integrated a comprehensive list of datasets (Supplementary Table 1), which included SNP functional annotations, expression and splicing quantitative trait loci (eQTL/sQTL), single nuclear RNA sequencing (scRNA) and chromatin interaction.Since distance was previously shown to be the most predictive feature in about 60-70% of GWAS loci, the distances from each SNP to each gene in the locus and the distance to the transcription start site were included in the model. 8To predict the severity of variant consequences, we used VEP 9 and PolyPhen-2. 10 The SNP2GENE function on the FUMA platform was used to perform functional mapping of SNPs to eQTLs. 
11In the FUMA settings, we chose the UK Biobank release2b 10k European reference panel, a maximum distance of 1000 kb from SNPs to gene, and included the major histocompatibility complex (MHC) region.All other FUMA settings were kept as default.Expressions QTL and 3D chromatin interaction mapping were performed using brain tissues, whole blood, Functional Annotation of the Mammalian genome (FANTOM) and Genotype-Tissue Expression (GTEx) datasets.Using scRNA datasets from Kamath et al., 12 we included gene expression from all ten subpopulations of dopaminergic neurons from post-mortem brains of seven PD and eight control donors.A complete list of all datasets can be found in Supplementary Table 3. Neighbourhood scores To integrate the concept of locus and LD in the model, we calculated the neighbourhood scores for each feature by transforming the data relative to the best-scoring gene within each locus, 7 allowing the model to find the highest expressed genes across each locus.For example, if the feature is 'maximum gene expression in blood', the gene with the highest expression in each locus would have a score of 1 while the score of the remaining genes in the locus would be calculated following the expression of gene divided by the expression of highest expressed gene in the locus.Negative log transformation was applied so that the closest gene had the highest score. Machine learning model to prioritize genes We used XGBoost 13 to train the machine learning model.We selected well-established genes from PD loci for the training dataset (GBA1, GCH1, LRRK2, MAPT, SNCA, TMEM175, VPS13C).These genes were labelled as positive labels, and the remaining genes from these same loci were labelled as negative labels.In total, the training set was composed of 212 genes (seven positive labelled and 205 negative labelled).The scale_pos_weight parameter in XGBoost was set to the ratio of negative to positive labels to control for the imbalance.The training process involved two steps.First, we performed feature selection to detect redundant features.This involved removing any variables from the dataset that were either redundant or uninformative.XGBoost was employed to transform the dataset into a subset containing the chosen features.To achieve this, we trained a model using the complete dataset and then retained the features present in the subset produced by XGBoost.In the second step, the final training model was created using the selected features.This two-step approach helps optimize the training process and ensures that the model focuses on relevant and informative features to make accurate predictions.We performed hyperparameter tuning and 5-fold cross-validation on both models.Mean average precision was used as an evaluation function to maximize the score of correct positive predictions made.Of the 284 features, 78 features passed feature selection for the final training model. Functional enrichment analysis To examine whether specific pathways may be involved in PD, based on the genes nominated in each locus, we performed an over-representation analysis using WebGestalt (Web-based Gene Set Analysis Toolkit) on 25 January 2023. 14We included the top candidate gene from each locus, and examined enrichment in terms of biological processes and cellular components from the Gene Ontology (GO) data.The genome protein-coding list was used as the reference list and pathways were considered to be associated with PD if significant after false discovery rate (FDR) correction. 
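As a rough illustration of the two-step training described above, the following sketch shows how locus-relative neighbourhood scores could be computed and fed into an imbalance-aware XGBoost model with feature selection and cross-validated average precision; the toy table, feature names and hyperparameters are assumptions for illustration, not the study's actual configuration.

```python
# Hedged sketch: locus-relative "neighbourhood" scoring plus an imbalance-aware
# XGBoost classifier, loosely following the two-step training described above.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score

# Toy table: one row per gene, with a locus id, raw features and a label.
genes = pd.DataFrame({
    "locus": [1, 1, 1, 2, 2, 2],
    "blood_expression": [5.0, 20.0, 1.0, 8.0, 2.0, 4.0],
    "tss_distance": [1e4, 2e5, 5e5, 3e4, 1e5, 9e5],
    "label": [0, 1, 0, 1, 0, 0],  # 1 = well-established PD gene in its locus
})

# Neighbourhood score: each gene is expressed relative to the best gene in its locus.
genes["expr_neighbourhood"] = genes.groupby("locus")["blood_expression"].transform(
    lambda x: x / x.max())
# For distances, a negative log transform gives the closest gene the highest score.
genes["dist_neighbourhood"] = genes.groupby("locus")["tss_distance"].transform(
    lambda x: -np.log(x / x.max()))

X = genes[["expr_neighbourhood", "dist_neighbourhood"]]
y = genes["label"]

# Weight positives by the negative/positive ratio to control the label imbalance.
spw = (y == 0).sum() / (y == 1).sum()
selector_model = XGBClassifier(scale_pos_weight=spw, n_estimators=50, max_depth=2)
selector = SelectFromModel(selector_model).fit(X, y)
X_sel = selector.transform(X)

# Final model on the retained features, scored by average precision (the study
# used 5-fold cross-validation; cv=2 keeps this toy example runnable).
final_model = XGBClassifier(scale_pos_weight=spw, n_estimators=100, max_depth=3)
scores = cross_val_score(final_model, X_sel, y, cv=2, scoring="average_precision")
print("mean average precision:", scores.mean())
```

In a real setting the per-gene probability scores would then come from the fitted model's predicted probabilities, with the top-scoring gene in each locus nominated.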
Single-cell and bulk RNA sequencing analyses To examine whether genes nominated by the machine learning model may be differentially expressed in PD relevant models, we used publicly available single-cell and bulk RNA sequencing (RNAseq) data from The Foundational Data Initiative for Parkinson's disease (FOUNDIN-PD) 15 and Kamath et al. 12 FOUNDIN-PD scRNA data include 80 induced pluripotent stem cell (iPSC) lines collected after 65 days. 15We then performed differential gene expression analyses between PD cases and controls.For scRNA, we used the MAST 16 package after adjusting for covariates, such as age, sex and batch.For bulk RNAseq, we used DESeq2, 17 while adjusting for the same covariates. Pathway polygenic risk score analyses Pathway-specific polygenic risk score (PRS) analysis can further support a role for specific pathways in PD. 18 Using PRSet, 19 pathway-specific PRSs were calculated for pathways nominated by gene set analysis on 14 4. Participants were unrelated individuals of European ancestry and were not gender matched.Rare SNPs (minor allele frequency < 0.01) with a P-value < 0.05 were excluded from the analysis.LD clumping was performed using r 2 = 0.1 and 250 kb distance.Permutation testing was performed with 10 000 label permutations to generate an empirical P-value for each gene set after adjusting for a prevalence of 0.005, age at onset for cases, age at enrollment for control, sex and the top 10 principal components.The Vance cohort was excluded from the meta-analysis due to significant heterogeneity. Rare variant burden analyses To examine whether there is an association between rare variants in the genes nominated by the machine learning model and PD, we used MetaSKAT 20 to perform a meta-analyses of rare variants.We used whole exome sequencing (WES) available for 602 PD patients, 6284 proxy patients and 140 207 controls from UK Biobank (n = 147 093) and 2600 PD patients, 3677 controls from Accelerating Medicines Partnership Parkinson's Disease (AMP-PD) 21 datasets (n = 6277).Additional selection criteria for UK Biobank and AMP PD were reported previously. 22,23We performed the analysis on several groups of rare variants (allele frequency < 0.01): loss of function variants; non-synonymous variants; potentially deleterious (CADD > 20) variants; and functional (including non-synonymous, frame-shift, stop-gain and splicing) variants.Pathway-specific rare variant analysis was performed by combining PD genes from the pathways nominated previously.All analyses were adjusted for age at onset for cases, age at sample for controls and sex. Machine learning model nominates PD-associated genes in each PD locus To train our machine learning model, we used seven wellestablished PD-associated genes from the PD GWAS (GBA1, LRRK2, SNCA, GCH1, MAPT, TMEM175 and VPS13C) as positive labels, and the remaining genes from the same loci (n = 205) were used as negative labels (i.e.genes that are unlikely to be involved in PD).We trained an XGBoost regression model to identify the best predictive features.Then, based the best predictive features, we assigned a probability score that indicated the likelihood that the gene was driving the association at each locus (Supplementary Table 2).We then nominated the top-scoring genes in each locus (Fig. 
2 and Supplementary Table 2). Two genes, MAPT and TOX3, were each nominated twice in the neighbouring loci that harbour them, taking the total number of genes nominated in this model to 76 genes in 78 loci. A probability score higher than 0.75 was assigned to 48 of the 76 genes (63%). Of note, five genes (NEK1, FDFT1, PSD, BAG3 and SLC2A13) that were ranked second in their respective loci also had a probability score >0.75. However, the nominated genes in their loci (CLCN3, CTSB, GBF1, INPP5F and LRRK2, respectively) all had probability scores >0.94. In seven other loci, the top nominated genes had an especially low probability score (<0.3), including RBMS3, HIST1H2BL, TRIM40, EHMT2, RPS12, MICU3 and ITGA8. Gene expression in subtypes of PD-associated dopaminergic neurons predicts PD-relevant genes Next, we used Shapley Additive exPlanations (SHAP) values to determine which features of the model contributed most to the prediction.26,27 SHAP values provide, for each gene, the relative contribution of each feature to the selection of that gene. The most important features for the scoring of each gene are shown in Fig. 3. As expected, distance-related features, such as the distance from the top-associated SNP in the locus to the transcription start site or the distance to the beginning of the gene, were the most important features in our model.7 The next most important feature was the Variant Effect Predictor (VEP) value, followed by additional distance measures.9 Interestingly, the next most important features were mRNA expression values within specific dopaminergic neuron subtypes. These dopaminergic neuron subtypes are defined by the expression of the genes GFRA2 and AGTR1 in single nuclear sequencing of post-mortem tissue; the latter is a specific subtype of dopaminergic neurons shown by Kamath et al.12 to be selectively degenerated in the brains of PD patients. The remaining features include expression in other dopaminergic neuron subpopulations, eQTLs and other expression features. Epigenetic features were not predictive in our model. As shown in Fig. 4, all nominated genes had at least one of the distance features contributing to their selection. On top of the known contribution of missense variants in GBA1, LRRK2 and GCH1, we nominated missense SNPs that contributed to the score of two candidate genes: SPNS1 (p.L512M, rs7140) and MLX (p.Q139R, rs665268). In Europeans, both SNPs are in high LD with the candidate GWAS SNPs of their respective loci (SPNS1: D′ = 0.88, r² = 0.74; MLX: D′ = 1, r² = 1). In GTEx, rs7140 and rs665268 are also eQTLs/sQTLs for SPNS1 and MLX across several PD-related tissues such as whole blood and anterior cingulate cortex. The eQTL and sQTL results from GTEx v8 are shown in Supplementary Table 5. SPNS1 and MLX have not previously been implicated in PD, and the important features identifying these genes as the top candidates for their respective GWAS loci are shown in Fig. 5. Differential expression of genes from the inositol phosphate biosynthetic pathway and MLX in PD To further establish the importance of the nominated genes in PD, we examined whether they are differentially expressed in PD patients compared to controls, using expression data from single nuclear RNAseq (scRNA) from Kamath et al.12 and single nuclear and bulk RNAseq datasets from FOUNDIN-PD.15 Several of the nominated genes were associated with PD in the data published by Kamath et al.
12 (Supplementary Table 6).In FOUNDIN-PD, 15 after excluding prodromal cases, we found differential expression of many genes 7).Results from the bulk RNAseq analysis of FOUNDIN-PD (n = 92) can be found in Supplementary Table 8. Structural analysis of SPNS1 and MLX Since non-synonymous variants in SPNS1 and MLX were identified as major contributors to their selection as the nominated genes in their loci, we aimed to examine the potential consequences of these variants by performing in silico structural analyses of the protein encoded by these genes.SPNS1 encodes a transporter for phospholipids at the lysosome membrane. 28It mediates the efflux of lysophosphatidylcholine and lysophosphatidylethanolamine out of the lysosome.The SNP rs7140 is located in the 3′-untranslated region (UTR) of the canonical splice variant 1 transcript, which produces the 528 amino acid (aa) isoform that has been investigated functionally 28 (UniProt #Q9H2V7).This canonical isoform has also been observed in numerous proteomics datasets in gpmDB (https://gpmdb.thegpm.org/index.html).However, six other potential isoforms generated by alternative splicing have been predicted, including a 538 aa fragment with an alternative C-terminus, whereas the rs7140 SNP is located within the coding region (UniProt #H3BR82).The rs7140 variant results in the p.L512M mutation in this isoform.To investigate the impact of this mutation on the function of this SNPS1 isoform, we inspected the 3D structure model generated by AlphaFold. 29Leu512 is located in the unstructured C-terminus of this membrane-bound protein, on the lumenal side of the lysosomal membrane (Supplementary Fig. 1A).The role of the C-terminus in this isoform of SPNS1 remains unclear, and thus the impact of the p.L512M mutation is unknown. The Max-like protein (MLX) is at the heart of a transcriptional network pathway involved in energy metabolism and cell signalling. 30,31It interacts with at least six other related proteins including the MAD family of transcriptional repressors and the Mondo family of transcriptional activators.These proteins contain basic/ helix-loop-helix/leucine zipper (bHLHZ) domains that form heterodimers and interact with DNA carrying the CACGTG E-box motif.To understand the impact of the p.Q223R MLX mutation (rs665268) on its activity, we modelled the structure of MLX heterodimers with both the MAD and Mondo families using AlphaFold.MLX dimerizes with MAD1, 31 and thus we superposed its bHLHZ domain on the MAD1-MAX-DNA complex crystal structure 32 to generate the ternary complex model.The model shows that Gln223 in MLX is at the end of the dimerization 'zipper' helix (Supplementary Fig. 1B).The mutation p.Q223R induces the formation of a salt bridge with Glu139 in MAD1, which could strengthen the interaction between MAD1 and MAX.This could then downregulate the interaction of MAD1 with MAX through competition, and thus affect the extent of the transcriptional repression.Glu139 is not conserved in other MAD-related proteins such as MXI1 and MAD3/4.Furthermore, the model of MLX interacting with MLXIP, a protein of the Mondo family also known as MondoA, 33 shows that the mutation may negatively affect the formation of this heterodimer by introducing a charge next to a hydrophobic sidechain (Supplementary Fig. 1C).The nuclear localization of Mondo proteins is dependent on their interaction with MLX, 30 and thus the mutation may down regulate activation by the Mondo family while strengthening repression via MAD1. 
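As a rough sketch of the kind of in silico inspection described above, the snippet below scans an AlphaFold-style model for charged side chains near a residue of interest, which is one simple way to flag a potential salt bridge; the file name, chain and residue number are placeholders, not the study's actual models.

```python
# Hedged sketch: crude contact check around a residue of interest in a predicted
# structure, e.g. to see which charged side chains lie near a modelled mutation site.
# The file name, chain id and residue number below are illustrative placeholders.
from Bio.PDB import PDBParser
from Bio.PDB.Selection import unfold_entities

structure = PDBParser(QUIET=True).get_structure("model", "mlx_mad1_model.pdb")
chain, resnum = "A", 223                     # assumed chain/residue of the mutation site
target = structure[0][chain][resnum]

charged = {"ASP", "GLU", "LYS", "ARG", "HIS"}
for res in unfold_entities(structure[0], "R"):   # all residues in the model
    if res.get_resname() not in charged or res is target:
        continue
    # Closest atom-to-atom distance between this charged residue and the target residue.
    d = min(a - t for a in res for t in target)
    if d < 4.0:
        print(res.get_parent().id, res.get_resname(), res.id[1], round(d, 2))
```

A short list of oppositely charged residues within roughly 4 Å would be consistent with the salt-bridge interpretation discussed above, although a proper assessment would also consider side-chain geometry and model confidence.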
Gene enrichment analysis shows the inositol phosphate pathway as a novel pathway involved in PD We further examined whether the nominated genes highlighted specific pathways and mechanisms associated with PD. We performed a pathway enrichment analysis by examining over-representation of the top nominated genes in biological processes and cellular components, using the top genes in each locus. Among the biological processes passing the FDR correction, the inositol phosphate biosynthetic process (GO:0032958) and the polyol biosynthetic process (GO:0046173) were strongly enriched (Fig. 6A). The inositol terms were associated with four candidate genes, namely ITPKB, IP6K2, PPIP5K2 and INPP5F. The features most important to the nomination of these genes as PD-associated by our ML model are shown in Fig. 5. Cellular components were also identified in the gene enrichment analysis (Fig. 6B). Pathway-specific polygenic risk score of the inositol phosphate pathway is associated with PD To further study the association between the putative novel PD pathways and PD status, pathway-specific PRSs were calculated for the above-mentioned gene sets. The association between these PRSs and PD was examined in six PD cohorts, followed by a meta-analysis as detailed in the 'Materials and methods' section. One outlier cohort was excluded due to heterogeneity. The pathway-specific PRSs were first calculated using all genes in each pathway. Then, to further validate that the specific pathway was indeed important in PD, we excluded the genes nominated by our machine learning model and recalculated the PRS. By removing these genes with GWAS-significant signals, we could examine the residual effect of the remaining pathway. The inositol phosphate biosynthetic pathway was associated with PD even after excluding the genes nominated in our analysis [odds ratio (OR) 1.06, 95% confidence interval (CI) 1.03-1.09, P = 7.01 × 10−5], as were other related pathways (Table 1). Figure 6 Volcano plots of gene ontology biological processes and cellular components: volcano plots of the gene-set enrichment analysis using WebGestalt, showing the log of the false discovery rate (FDR) versus the enrichment ratio for biological processes (A) and cellular components (B); P-values are calculated using a hypergeometric test, and all pathways significant after FDR correction are named. Forest plots of all the pathway PRSs are shown in Supplementary Fig. 2. Rare variants in KCNIP3, LSM7 and the polyol/inositol biosynthetic pathway are involved in PD To further establish the potential role of the nominated genes in PD, we performed rare variant burden tests on all the genes nominated by our model. As expected, genes known to harbour rare PD coding mutations, including GBA1, LRRK2 and GCH1, were associated with PD (Table 2 and Supplementary Table 9). Three additional genes, including two genes that have not previously been implicated in PD (KCNIP3 and LSM7), showed a burden of rare variants after FDR correction for multiple comparisons. We then examined the genes from the pathway enrichment analysis and found that rare variants in the polyol/inositol biosynthetic pathway were also associated with PD (SKAT-O, P = 1.58 × 10−4), further supporting its role in PD.
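To illustrate the leave-genes-out logic of the pathway PRS analysis, here is a minimal conceptual sketch: a weighted sum of risk-allele dosages restricted to pathway SNPs, tested against disease status with covariate adjustment, once with all pathway genes and once excluding the nominated genes. This is not the PRSet procedure used in the study, and all inputs are placeholder arrays.

```python
# Hedged sketch of a pathway-specific polygenic risk score and its association test.
# All genotype dosages, effect sizes, labels and covariates are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_ind, n_snp = 500, 40
dosages = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)  # 0/1/2 risk alleles
betas = rng.normal(0, 0.05, n_snp)                               # GWAS effect sizes
in_pathway = rng.random(n_snp) < 0.3                             # SNPs mapped to pathway genes
nominated = rng.random(n_snp) < 0.1                              # SNPs in ML-nominated genes

def pathway_prs(mask):
    """Weighted sum of risk-allele dosages over the SNPs selected by the mask."""
    return dosages[:, mask] @ betas[mask]

status = rng.integers(0, 2, n_ind)                  # case/control labels (placeholder)
covars = rng.normal(size=(n_ind, 2))                # e.g. age and sex, or principal components

for label, mask in [("full pathway", in_pathway),
                    ("pathway minus nominated genes", in_pathway & ~nominated)]:
    prs = pathway_prs(mask)
    prs = (prs - prs.mean()) / prs.std()            # standardize so the OR is per SD
    X = sm.add_constant(np.column_stack([prs, covars]))
    fit = sm.Logit(status, X).fit(disp=0)
    print(label, "OR per SD:", round(np.exp(fit.params[1]), 3),
          "P:", f"{fit.pvalues[1]:.2e}")
```

The second test corresponds to the residual-effect check described above: if the pathway remains associated after its GWAS-nominated members are removed, the signal is not driven by those genes alone.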
Discussion Using multi-omic data and machine learning, we nominated genes that potentially drive the associations with PD for each of the 78 PD GWAS loci. Our nominated genes included many not previously studied in the context of PD. Additionally, we identified two novel genes with rare variants (KCNIP3 and LSM7) as well as genes with GWAS-significant coding variants, such as SPNS1 and MLX, that could be further studied. Furthermore, our gene enrichment, pathway-specific PRS and rare variant analyses suggested involvement of the inositol phosphate biosynthetic pathway in PD. Four genes nominated by our machine learning model were associated with the inositol phosphate biosynthetic pathway, ITPKB, IP6K2, PPIP5K2 and SNCA,34 which showed strong enrichment of this pathway. In addition, INPP5F, also nominated by our analysis, is involved in inositol processing through a parallel pathway.35 Our results demonstrate that the inositol pathway PRS, even when excluding the previously mentioned genes, is still associated with PD. Taken together, our findings support the importance of the inositol phosphate pathway in PD. Based on the evidence from the candidate inositol genes and previous inositol studies, inositol could potentially be a therapeutic target for PD. In 1999, a clinical trial of inositol was conducted in nine PD patients.36 Treatment with inositol compared with placebo did not improve clinical outcomes; however, we cannot rule out inositol and inositol phosphates as potential therapeutic targets, as only nine patients were recruited for the trial. ITPKB encodes a ubiquitous kinase that phosphorylates inositol 1,4,5-trisphosphate (IP3) to inositol 1,3,4,5-tetrakisphosphate (IP4) using a Ca2+/calmodulin-dependent mechanism. IP3 is a second messenger that stimulates calcium release from the endoplasmic reticulum (ER). In primary neurons, ITPKB knockdown/overexpression was shown to increase/reduce levels of α-synuclein aggregation.37 Additionally, ITPKB knockdown in neurons leads to the accumulation of calcium in mitochondria. This accumulation can impair the process of autophagy, which is crucial for maintaining mitochondrial health. In neuroblastoma cells, ITPKB mRNA levels were also shown to be correlated with SNCA expression in the cortex, and ITPKB protein levels were increased in cells expressing wild-type α-synuclein and the A53T and A30P mutants.38 Meanwhile, IP6K2 and PPIP5K2 interact with the same substrates. IP6K2 converts inositol hexakisphosphate (IP6) to 5-diphosphoinositol pentakisphosphate (5-IP7) or 1-diphosphoinositol pentakisphosphate (1-IP7) to bis-diphosphoinositol tetrakisphosphate (1,5-IP8), while PPIP5K2 converts 5-IP7 to 1,5-IP8 and IP6 to 1-IP7.39 In mice, IP6K2 has been implicated in cell death, apoptosis and neuroprotection.40 One study proposed that IP6K2 regulates mitophagy via the parkin/PINK1 pathway, but further evidence would be required to confirm this hypothesis.40 PPIP5K2 has not previously been implicated in PD but is associated with hearing loss and colorectal carcinoma.41,42 Finally, INPP5F is involved with a different inositol pathway; it encodes SAC2, which converts phosphoinositides such as PI(4,5)P2 to phosphatidylinositol during endocytosis.35 Inositol phosphate has been suggested to be involved in obesity, insulin resistance and energy metabolism.43 In post-mortem brain tissues of PD patients, 3H-inositol 1,4,5-trisphosphate binding sites were found to be reduced in certain brain regions such as the caudate nucleus, putamen and pallidum.44
Additionally, IP6 was shown to be associated with PD. IP6 has a neuroprotective effect on dopaminergic cells by preventing 6-OHDA-induced apoptosis.45 IP6 also inhibits the activity of β-secretase 1 (BACE1), an enzyme that cleaves amyloid-β precursor protein into toxic amyloid-β peptides.46 Paraquat-induced neurodegeneration in Drosophila was suggested to increase the levels of inositol phosphate metabolites.47 Previous studies have also suggested that different stereoisomers of inositol, such as scyllo-inositol, can inhibit the aggregation of α-synuclein48 or decrease the myo-inositol concentration in patients with PD.49,50 Recent studies on inositol investigated the role of SYNJ1, mutations in which cause an autosomal recessive form of early-onset parkinsonism.51 SYNJ1 encodes a lipid phosphatase of phosphatidylinositol-3,4,5-trisphosphate (PIP3).52 SYNJ1 knockout cell models were associated with an increase of α-synuclein and PIP3 levels. PIP3 dysregulation was suggested to promote α-synuclein aggregation, which increases the risk of PD. Together with our data, there is strong evidence for the involvement of the inositol phosphate biosynthetic pathway in PD, and this pathway should be further studied using both basic science and translational approaches. Outside of the inositol pathway, SPNS1 and MLX were found to be the top candidate genes in their respective loci, with putative causal missense SNPs rs7140 and rs665268. Rs7140 corresponds to p.Leu563Val on the SPNS1 transcript variant X1. We found that SPNS1 expression is lower in the SOX6_AGTR1 dopaminergic neuron subpopulation in PD compared with controls. This subcluster was previously highlighted as the most susceptible to neurodegeneration in PD.12 SPNS1 encodes a sphingolipid transmembrane transporter in the lysosome. The autophagy-lysosomal pathway is well established to be crucial in PD pathogenesis, especially the lysosomal sphingolipid metabolism pathway, which includes well-established PD-associated genes including GBA1, GALC, SMPD1 and others.53,54 SPNS1 deficiency results in lipid accumulation in the lysosome and impaired lysosomal function.28 The second nominated gene with a putative causal missense variant, MLX, encodes the Max-like protein X, which belongs to a family of transcription factors regulating glucose metabolism. Rs665268 is a missense variant (p.Gln139Arg) that was found to be associated with Takayasu's arteritis, an autoimmune systemic vasculitis.55 MLX was also reported to be associated with age at onset of Alzheimer's disease in females.56 This variant was suggested to affect two important PD pathways by increasing oxidative stress and suppressing autophagy in immune cells.55,56 SPNS1 and MLX have not previously been implicated in PD. Both variants, rs7140 and rs665268, were found in high LD with the top candidate GWAS SNP of their respective locus. When examining missense SNPs in LD with the top GWAS SNPs, SPNS1, MLX and CD19 were the only genes with such features. CD19 was not nominated in our study, as it is located in the same locus as SPNS1 and ranked lower than SPNS1. These findings indicate that these genes could play a role in PD and should be further studied. Other studies have attempted to use machine learning to characterize genes involved in PD. Using machine learning, Ho et al.
57 integrated tissue-specific eQTLs and the genotypes of PD patients and controls to identify PD-specific genes.They nominated the roles of two key variants in PD (rs7617877, rs6808178) and the potential role of heart atrial appendage tissue.Interestingly, AGTR1, a gene associated with many PD single-nuclei subpopulations included in our model, encodes for angiotensin II receptor type 1 protein. 58This protein is part of the renin-angiotensin system, which regulates blood pressure and the balance of fluids and salts in the body. 58o et al. 57 also validated some of the top genes from our model such as INPP5F, P2RY12, HIP1R, STK39 and CTSB.Transcriptional changes to these genes could contribute to PD. Interestingly, in certain regions of the genome, such as VPS13C, the top genes showed lower probability scores (Fig. 2).This could be due to complex LD structure, which weakens the effect size of eQTLS, as the variants in LD are associated with multiple genes.In such scenarios, the model might encounter challenges in precisely predicting the responsible gene.Additionally, the number of samples employed in statistically assessing attributes like eQTLs and enhancer-promoter interactions significantly impacts the model's training.Features derived from studies with limited sample sizes may be less powered to detect eQTLs and more likely to be excluded from the model.For example, while data regarding enhancer-promoter interactions were incorporated into the training attributes, it might not have been important for the majority of variant-gene pairs.Overall, while VPS13C had a low probability score for a gene in the training set, it was still the top gene in its respective locus. Although we identified candidate genes and new rare mutations, there were several limitations to this study.This study was based on a GWAS of European populations only.Therefore, our results are potentially restricted to Europeans.While there are some studies on the association of chromosome X in PD, the statistical power was limited compared with a PD GWAS of autosomes.As a result, no analysis was performed on chromosome X.In addition, the training set for the machine learning model was limited to a small set of known or highly likely PD genes with the assumption of one causal gene per locus.The study also lacked samples for a testing set due to the small number of well-established PD genes.Since these limitations may have introduced some bias, we used different strategies such as controlling for an imbalanced dataset and choosing balanced accuracy as an evaluation function to maximize the performance of the model.Although the distance between variants and genes holds significant predictive power in the model, it is crucial to acknowledge that not all top genes can accurately be predicted solely based on distance.Of the 78 genes analysed, 13 were not the closest genes in terms of distance from the gene to the top GWAS SNPs, and 25 were not the closest genes based on distance to the transcription start site.Additionally, when comparing the scRNA and bulk RNAseq results, most of the differentially expressed genes did not overlap across our datasets.For example, while INPP5F was nominated in scRNA of both datasets, it was not significant in the bulk RNAseq analysis.Lastly, the meta-analysis of rare variants can also be somewhat biased due to case/control imbalance.Larger GWAS and functional studies will be required to validate our findings. 
Our results nominated multiple genes that have not been thoroughly studied in PD and provide a foundation for future functional studies of these genes.As larger PD GWASs will nominate more SNPs and loci, prioritizing causal genes will be crucial to understanding the underlying biological mechanisms and disease pathophysiology through additional studies.Future gene prioritization studies will be able to leverage larger datasets with more positive labels as new PD genes are discovered, increasing the accuracy of predictions. Figure 2 Figure 2 Probability score of the Parkinson's disease genome-wide association study candidate genes.This figure shows the probability scores from the machine learning model for each locus in the Parkinson's disease genome-wide association study loci sorted in descending order.For each gene, the top non-distance feature was used to colour the data. Figure 3 Figure 3 Feature importance for the Parkinson's disease genome-wide association study gene prioritization model.Bee-swarm plot of feature importance using Shapley Additive exPlanations values along with the distribution of genes based on feature value. Figure 4 Figure 4 Heat map of feature importance.The heat map is generated using Shapley Additive exPlanations (SHAP) value for the top candidate gene in each locus.The plot at the top represents the probability score of each gene. Figure 5 Figure 5 Waterfall plots for Parkinson's disease genome-wide association study candidate genes.Importance of the top 10 features using Shapley Additive exPlanations values for different selected candidate genes.E[f(x)] is the base score for each gene, which is calculated based on the average value of each features.f(x) is the final score after accounting for all features.
7,378
2023-10-06T00:00:00.000
[ "Medicine", "Biology", "Computer Science" ]
Computational Methods for Physiological Signal Processing and Data Analysis Biomedical signal processing and data analysis play pivotal roles in the advanced medical expert system solutions. Signal processing tools are able to diminish the potential artifact effects and improve the anticipative signal quality. Data analysis techniques can assist in reducing redundant data dimensions and extracting dominant features associated with pathological status. Recent computational methods have greatly improved the effectiveness of signal processing and data analysis, to support the efficient point-of-care diagnosis and accurate medical decision-making. This editorial article highlights the research works published in the special issue of Computational Methods for Physiological Signal Processing and Data Analysis. The context introduces three deep learning applications in epileptic seizure detection, human exercise intensity analysis, and lung nodule CT image segmentation, respectively. The article also summarizes the research works on detection of event-related potential in the single-trial electroencephalogram (EEG) signals during the auditory tests, along with the methodology on estimating the generalized exponential distribution parameters using the simulated and real data produced under the Type I generalized progressive hybrid censoring schemes. The article concludes with perspectives and discussions on future trends in biomedical signal processing and data analysis technologies. Introduction Nowadays, most point-of-care healthcare systems consist of several biomedical sensors to acquire physiological signals and data to support health condition monitoring and earlystage pathology screening functions [1]. Integration and fusion of the massive multiscale biomedical signals recorded by wearable sensors or medical images generated by multiple imaging modalities have attracted more and more attentions from the research community [2][3][4]. Advanced computational tools can be effectively used to compute signal dynamics, timefrequency properties, data correlations, and statistical parameters for understanding the complex physiological processes associated with disease symptoms. The emerging deep learning neural networks and machine learning algorithms have the advantages of representing high-dimensional data features with hierarchical network layers, and achieving the accurate pattern classification results, which helps provide the informative diagnostic references for medical decision-making in clinical applications. This special issue collects several research works on the topic of computational methods for physiological signal processing and data analysis, which are summarized as follows. Time-Frequency Analysis and Deep Learning for Epileptic Seizure Detection. Epilepsy is a type of neurological disorder characterized by the recurring seizures due to sudden surges of disturbed electrical activities in the brain [5]. Patients with epilepsy commonly suffer from involuntary convulsions, a loss of consciousness, and other movement disorders [6]. The electroencephalogram (EEG) records the potentials of cerebral cortex based on the electrodes attached to the scalp, which may manifest the electrical activities in the brain. Detection of epileptic seizure using only a single-channel EEG signal is a challenge, because such a task should explore the limited information from the singlechannel signal, and construct an effective system with acceptable classification accuracy and robustness as well. 
In [7], Pan et al. first studied the temporal waveform variants of the single-channel EEG signals of patients with epilepsy, along with the frequency-domain properties based on the discrete Fourier transform (DFT). Then, they implemented the signal segmentations with multiple fixed-length windows and characterized the time-varying changes of frequency components in the EEG signals using the discrete wavelet transform and short-time Fourier transform [7]. The time-domain, frequency-domain, and time-frequency features were fused as a hybrid representation of the epileptic EEG signals. Four well-devised convolutional neural networks (CNNs) were employed to accomplish the signal classification task. Pan et al. [7] considered the "divideand-conquer" strategy and sent each EEG feature to an individual lightweight CNN. Each CNN was regularized with reduced parameters to perform the depthwise convolution and pointwise convolution operations and output the patterns through the maximum pooling layer. The syncretic patterns were finally combined to detect the epileptic seizure events. The results evaluated with 5-fold, 10-fold, and 20fold cross-validation methods demonstrated that the hybrid time-frequency features and deep learning neural networks can improve the accuracy performance in the epileptic seizure detection [7]. ERP Latency Detection in Single-Trial EEG Signals. Detection of amplitude and latency components in a single-trial EEG signal requires a series of artifact removal, feature extraction, and event-related potential (ERP) identification procedures. In [8], Zang et al. developed an effective machine learning technique for latency detection in singletrial EEG signals, instead of the conventional superposition and average method. Two different EEG data sets (i.e., simulated N170 and real P50 recordings) were tested for the performance evaluation purpose. The simulated EEG signals with N170 ERP components were obtained from the benchmark data set of Texas State University, USA. A total of 21 young subjects repeated 4 min eyes closed and 4 min eyes open behaviors in a resting state. The 72-channel EEG signals were downsampled to 256 Hz for subsequent pattern analysis. The real single-trial ERP data were recorded from 8 subjects who performed the 15 min auditory tests with three delayed-response tasks. In order to improve the EEG signal quality, Zang et al. [8] implemented the signal preprocessing procedures to remove artifacts. The low-frequency noise was first eliminated using a high-pass finite impulse response (FIR) filter (with the cutoff frequency at 1 Hz). The electroocular artifacts, blinks, and baseline shifts were cancelled using the EEGLAB toolbox. Then, the single-trial EEG signals were segmented into several fixed-length epochs, with the linked mastoids as references. The logistic regression, multilayer perceptrons, and support vector machine classifiers were utilized to distinguish the ERP latency components in each epoch. The experimen-tal results showed that the multilayer perceptrons was able to provide an accuracy up to 90.69% of N170 latencies on the simulated data set. The method proposed by Zang et al. [8] significantly outperformed the Woody filter in different signal-to-noise (SNR) conditions. The experimental results on the real ERP data set indicated that the P50 amplitudes evoked by the first sound were significantly larger than those evoked by the second sound in three auditory tasks. 
Such results were consistently reproduced until the end of sensory gating, which showed an excellent generalization capability. Analysis of Athlete Exercise Intensity with ECG and PCG. Physiological signals are useful for monitoring and assessing the physical status of athletes. Wang and Zhu carried out an interesting research work on monitoring and categorizing body exercise intensity by means of electrocardiogram (ECG) and phonocardiogram (PCG) signal analysis [9]. The ECG and PCG signals were first projected onto an image with various motion intensity annotations. The AlexNet CNN architecture, which contained five convolutional layers, one pooling layer, and three fully connected layers, was employed to classify the body exercise intensities of athletes [9]. To visualize the cluster scatters, the t-SNE technique was used to reduce the data dimensions in the fully connected layers of the AlexNet architecture. The exercise intensity patterns include human activities on a bicycle, a treadmill, and a stationary bicycle, walking at constant speed, lying in bed, and sitting in an armchair [9]. The classification results indicated that AlexNet was able to provide the highest overall accuracy (95.7%) for the six types of exercise intensity [9], which was superior to the results reported in previous related works. Lung Nodule CT Image Segmentation and Detection Based on Multiposition U-Net. Caused by different respiratory disorders and infections, lung (pulmonary) nodules grow in the form of small abnormal masses in the lung. Lung nodules can be detected and diagnosed as benign or cancerous using the computed tomography (CT) imaging technique. Automatic segmentation and detection of the lung nodule regions from CT scanned images could greatly reduce radiologists' workload and also provide informative references for further accurate diagnosis in clinical practice. The major contributions of Zhang et al. [10] focused on three parts, i.e., lung parenchyma segmentation, extraction of lung nodule regions, and sign classification based on CT image morphological features. The U-shape network (U-Net) paradigm was utilized to accomplish these three tasks. In the lung parenchyma segmentation procedure, the attention mechanism was applied to prevent background pixel interference and improve the semantic segmentation accuracy of the U-Net [10]. Then, the regions of interest can be localized from the receptive fields with the dense atrous convolution approach. A densely connected block module was used to reduce the network parameters and avoid redundant convolution computations. In the lung nodule extraction and sign classification procedures, the Mish activation function was used to speed up network transmission and improve computation efficiency. A two-way enhanced feature pyramid network was employed to integrate the low-level pixel features and high-level semantic features through two-way cross-scale connections. The classification results indicated that the unified U-Net method proposed by Zhang et al. [10] may achieve the best sensitivity (91%) and specificity (88%) values and an excellent receiver operating characteristic curve, in comparison with the other traditional machine learning algorithms. Estimation of Maximum Likelihood and Bayesian Model Parameters. Statistical models are useful to describe the nonstationary random process of electrophysiological signals.
The parameters of probability distribution estimators should be properly calculated from the data observations in longterm experiments. The Type I generalized progressive hybrid censoring scheme comprehensively considers both of the Type I and Type II data censoring scenarios. In [11], Nagy and Alrasheedi presented the methods for estimating the parameters of maximum likelihood and Bayesian estimators, based on the Type I generalized progressive hybrid censored data. They first described the probability density and cumulative distribution functions of a generalized exponential distribution. Then, the Type I progressive censoring data generated from the generalized exponential distribution were used to compute the shape and scale parameters for estimating the maximum likelihood models. The following text presented the statistical inference procedures for approximating the confidence intervals for the shape and scale parameters, respectively. Confidence intervals for the reliability function and hazard functions of the maximum likelihood estimators were calculated using the delta method. Nagy and Alrasheedi [11] also discussed the Bayesian estimates of the shape and scale parameters of the generalized exponential distribution. The generalized progressive hybrid censoring scheme data were generated with the Markov chain Monte Carlo approach and the Gibbs sampling procedure. In addition, Nagy and Alrasheedi [11] computed the reliability and hazard functions and their credible intervals for the generalized exponential distribution based on the real deep groove ball bearings data. They concluded that the Bayesian estimates of the shape and scale parameters would bring smaller mean squared errors with the combination of Type I and Type II progressive hybrid censored data [11]. Memory Activity Classification Based on EEG Signal Analysis. The changes of memory activities in the brain can be measured and recorded by EEG signals. In [12], Xi et al. studied the classifications of memory activities with different memory loads or targets based on EEG signal analysis. The memory task experiments simultaneously acquired the 32-lead EEG signals and the corresponding behavioral data from 19 healthy subjects. The preprocessing and reduction of mode interferences were implemented using the independent component analysis (ICA) method. A 4th-order Butterworth filter was used to extract the EEG components of Gamma rhythm at 30-100 Hz [12]. Then, the extract signal components of Gamma rhythm were segmented with 1 s durations immediately prior to each valid finger clicking moment. The phase locking value of Gamma rhythm between each pair of leads was estimated, and the binarization was determined based on a threshold of 0.179 based on a band-pass filter [12]. The brain network characteristics were computed with the binarized phase locking values, and a support vector machine with radial basis function kernels was used to classify the memory activities. The experimental results showed that the brain network characteristics of node degree, local clustering coefficient, and betweenness centrality were useful for classifications of the presence or absence of memory and analysis of the mental workload intensity at the moment of memory [12]. Perspectives With the development of wearable biosensors, the volume and dimensions of biomedical signals and images have extensively increased. Raw data records are very susceptible to random noise and external interferences. 
Novel signal processing and analysis techniques can eliminate the artifacts and guarantee the signal quality for pattern analysis. The state-of-the-art signal feature analysis literature mainly focuses on time-domain, frequency-domain, joint time-frequency domain, and sparse signal feature extractions [13]. The temporal waveform changes can be parameterized with linear regression and autoregressive models. The frequency features can be computed with the power spectral density derived from the discrete cosine transform and fast Fourier transform. The frequency variants with respect to different time spans can be characterized by the continuous wavelet transform, short-time Fourier transform, and wavelet packet transform [13]. The research work of Zang et al. [8] has demonstrated the merits of joint time-frequency features in the epileptic seizure detection application. Recently, the international research community has emphasized the sparse signal decomposition and reconstruction. The representative works are the empirical mode decomposition [14] and singular value decomposition [15]. The artifacts and signal components can be separated into different channels during the iterative computation processes [16]. The sparse signal representations can be used as the distinct features for further pattern classifications. Deep learning neural networks are prevailing in medical diagnosis applications nowadays [17]. The deep learning networks are commonly composed of several convolution, pooling, and fully connected layers, which can better cope with the spatial features in high-dimensional spaces. The reduction of unnecessary network parameters and construction of hybrid network combinations are the future trends in deep learning. Conflicts of Interest The editors declare that they have no conflicts of interest regarding the publication of this Special Issue.
3,122
2022-08-10T00:00:00.000
[ "Medicine", "Computer Science" ]
Research on Parameter Collaborative Configuration of DC Circuit Breaker and DC Fault Current Limiter Fault current suppression is the key technology to ensure the safe operation of the DC power distribution system. In order to realize the parameter collaborative configuration of the DC circuit breaker and the DC current limiter and improve the fault current suppression capability, the fault current suppression mechanism of the DC power distribution system is revealed based on the circuit model. Then, based on the mathematical model of the DC breaker, the characteristic parameters of DC breaking are extracted, and then the influence of different characteristic parameters on the breaking characteristics of fault current is studied. Finally, the mathematical model of the collaborative process between DC circuit breaker and DC current limiter is es-tablished. The characteristic parameters of fault current collaborative suppression are extracted. The coupling effects of different characteristic parameters on the fault current collaborative suppression are studied. The principle of collaborative configuration of DC circuit breaker and DC current limiter is proposed, and the collaborative suppression ability of DC circuit breaker and DC current limiter to fault current is fully exploited to ensure the safe and reliable operation of the DC power distribution system. of the AC/DC energy conversion, poor flexibility of the distribution transformation and low matching of the distribution links in traditional AC distribution systems are becoming increasingly prominent. In addition, with the increasing demand of customers for power quality and reliability, the traditional AC distribution system is facing new challenges in power supply stability, economy and other aspects, which can't meet the demand of DC power access and flexible power consumption. Because of its advantages of high efficiency, large power supply capacity, low line loss, good power quality, no reactive power compensation, and suitable for distributed power supply, energy storage devices and DC load access, the DC distribution system is helpful to solve a series of new problems in the development of the traditional AC distribution system, and is an important development direction of the distribution system [1]- [5]. Compared with the traditional HVDC transmission and flexible DC transmission, although the voltage level of the DC power distribution is relatively low, the main wiring structure and the operation mode are more complicated and diverse, which leads to more fault modes, faster development of faults, a wide range of impacts and large energy release of the DC distribution system. In addition, the natural property of the DC current without zero-crossing leads to the high requirement of breaking capacity and breaking time for equipment relying solely on the DC circuit breaker to clear faults, which greatly reduces the competitive advantage of the DC distribution technology [6]- [11]. Fault current of a typical DC distribution system as shown in Figure 1. Therefore, there is an urgent need to study the theory and method of the fault current suppression in DC distribution system, in order to solve the bottleneck of the application and promotion of the DC distribution system. Fault current suppression devices in the DC distribution system mainly include the DC circuit breaker, the DC current limiter and the converter with fault isolation. Among them, the DC circuit breaker is the most ideal choice for the Figure 1. 
DC fault current suppression in the DC distribution system. DC circuit breakers, which can be divided into three types, namely the mechanical DC circuit breaker, the solid-state DC circuit breaker and the hybrid DC circuit breaker [12]- [16], are at present still at the stage of theoretical research and prototype development. The DC current limiter can limit the large increase of the fault current after a DC fault occurs and keep the fault current within an acceptable range, thus reducing the requirements on the operation time and capacity of the DC circuit breaker for clearing the fault. It can be divided into three types: the fault current limiter based on superconducting material [17], the fault current limiter based on positive temperature coefficient resistance [18], and the fault current limiter based on power electronic devices [19] [20]. The converters with fault isolation are mainly modular multilevel converters, which can be divided into the MMC based on the full-bridge sub-module and the MMC based on the clamped double sub-module [21]. In summary, the existing research only proposes topology structures and control methods for fault current suppression by a single device, remains at the stage of theoretical research and prototype development, and lacks extensive engineering application. In view of the fault characteristics of the DC power distribution system, and in order to ensure that the DC power distribution system operates safely and reliably, the parameters of the fault current suppression devices need to be configured in a coordinated manner. Fault Current Suppression Mechanism When the MMC is in the unblocked mode, the current flowing from the MMC to the fault point is unipolar. To limit this unipolar current, there are two main ways [22]: 1) Resistive current limiting: A large resistor is connected in the fault current path, and the peak value of the fault current is limited by the resistor, as in a resistive superconducting current limiter. 2) Inductive current limiting: A large inductor is connected in the fault current path, and the rate of rise of the fault current is limited by the inductor, as in a saturated-reactance-type current limiter. There are two main ways to reduce this unipolar current to zero: 1) Insert an infinite resistance (i.e., the arrester) in the fault current path. In this case, regardless of the value of the equivalent voltage source of the MMC, the fault current will drop to zero; this is the approach of the mechanical DC circuit breaker and the hybrid DC circuit breaker. 2) Connect a capacitor in the fault current path, so that the fault path is converted into an LC oscillating circuit; the unipolar fault current then becomes an alternating current, and the fault current is cut off at the first zero-crossing point, as in the MMC based on the full-bridge sub-module. In summary, through the coordination of the DC current limiter and the DC circuit breaker, as shown in Figure 2, the fault current can be limited to a lower value, and the lower fault current can then be broken. This will reduce the difficulty of arc extinguishing and of manufacturing the mechanical switches, reduce the dynamic overvoltage of the power electronic devices caused by breaking a large current, and increase the interruption capacity. In the following sections, the parameter collaborative configuration of the DC circuit breaker and the DC fault current limiter will be studied.
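The qualitative effect of inductive current limiting described above can be illustrated with a small numerical sketch. The Python snippet below is illustrative only: the 10 kV source voltage, the fault-loop resistance and the two inductance values are assumed example numbers, not parameters taken from this paper. It integrates di/dt = (U_dc − R_f·i)/L with a simple Euler scheme and shows that a larger series inductance markedly lowers the fault current reached in the first millisecond after the fault.

```python
# Illustrative sketch of inductive fault-current limiting in a DC feeder.
# All numerical values are assumed examples, not data from the paper.

U_DC = 10e3      # DC source voltage [V] (assumed)
R_FAULT = 0.1    # total fault-loop resistance [ohm] (assumed)
DT = 1e-6        # integration step [s]
T_END = 5e-3     # simulated window [s]

def fault_current(L, u_dc=U_DC, r=R_FAULT, dt=DT, t_end=T_END):
    """Euler integration of L*di/dt = u_dc - r*i, starting from i = 0."""
    i, trace = 0.0, []
    for _ in range(int(t_end / dt)):
        i += (u_dc - r * i) / L * dt
        trace.append(i)
    return trace

for L in (2e-3, 10e-3):                    # small vs. large limiting inductance [H]
    trace = fault_current(L)
    i_1ms = trace[int(1e-3 / DT) - 1]      # current reached after 1 ms
    print(f"L = {L*1e3:.0f} mH -> i(1 ms) ~ {i_1ms/1e3:.1f} kA")
```

With these assumed numbers the 2 mH case reaches roughly five times the current of the 10 mH case after 1 ms, which is the effect a saturated-reactance-type limiter exploits.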
Analysis of DC Breaking In this chapter, the mathematical model of the DC circuit breaker is established, the characteristic parameters of DC breaking are extracted, and the influence of the different characteristic parameters of DC breaking on the breaking effect is studied. Technical Solution of DC Circuit Breaker In this paper, an infinite resistance is introduced into the fault current path to realize current breaking. Typical examples are the mechanical DC circuit breaker and the hybrid DC circuit breaker, as shown in Figure 3. For the mechanical DC circuit breaker, when a fault occurs, the mechanical switch K is opened. When the contact opening distance reaches a certain distance, the solid-state switch S is turned on, and the resonant current generated by the LC oscillation branch is superimposed on the mechanical switch branch. The arc of the mechanical switch K is then extinguished and the fault current is transferred to the LC oscillation branch; when the mechanical switch is fully opened and the gap can withstand the corresponding transient recovery voltage, the solid-state switch S is turned off; when the voltage across the oscillation branch exceeds the action value of the metal oxide varistor (MOV), the fault current is transferred to the MOV branch, so that the MOV is connected in the fault path. For the hybrid DC circuit breaker, when a fault occurs, the power electronic switch T is turned on first, and then the fast mechanical switch K is opened. When the contact opening distance reaches a certain distance, the control switch of the coupled negative voltage circuit is triggered, and the current of the mechanical switch branch is transferred to the power electronic switch branch, so the arc of the mechanical switch is extinguished; when the contact gap can withstand the corresponding transient recovery voltage, the power electronic switch T is turned off; at this time, the overvoltage generated by the branch will exceed the action value of the MOV, so that the MOV is turned on, and the MOV is serially connected in the fault path. It can be seen from the above that the mechanical DC circuit breaker and the hybrid DC circuit breaker differ only in the current-commutation mode. After the DC circuit breaker completes the current-commutation, an infinite resistor (i.e., the MOV) is connected in the fault current path. It is used to realize the breaking of the fault current and the absorption of the fault energy. Mathematical Model of DC Circuit Breaker When the DC circuit breaker completes the current-commutation and an infinite resistor is connected to the fault current path, the equivalent circuit model for the DC breaker to break the DC fault current is shown in Figure 4, where u_MOV depends on the characteristics of the MOV. Here, the volt-ampere characteristic of the MOV is expressed by two piecewise broken lines. Solving Equation (2) then gives the expressions of the fault current and the MOV voltage during breaking. Assuming U_dc = 10 kV, the curves of i_dc and u_MOV as functions of time are shown in Figure 5. As seen from the figure, when the power electronic switch T is turned off, the overvoltage generated by the branch exceeds the action value of the MOV and the MOV is serially connected to the circuit; the DC fault current and the voltage across the MOV reach their maximum values and then gradually decrease; after the current crosses zero, the voltage across the MOV abruptly changes to the rated DC voltage. This is because when the fault occurs, the fault current is broken by the circuit breaker while the MMC is not blocked.
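To make the breaking stage concrete, the worked equations below sketch one plausible closed form; this is a reconstruction under stated assumptions (a single series loop consisting of the source U_dc, the loop inductance L, a lumped series resistance R_s, and the MOV treated as a constant clamping voltage U_MOV > U_dc while it conducts), not necessarily the paper's exact Equations (1)-(3).

```latex
% Assumed series loop: source U_dc, inductance L, lumped resistance R_s,
% MOV approximated as a constant clamp U_MOV during conduction.
\begin{aligned}
  L\,\frac{di_{\mathrm{dc}}}{dt} + R_s\, i_{\mathrm{dc}} + U_{\mathrm{MOV}} &= U_{\mathrm{dc}},
  \qquad i_{\mathrm{dc}}(0) = i_{\mathrm{dc}0},\\
  i_{\mathrm{dc}}(t) &= \frac{U_{\mathrm{dc}}-U_{\mathrm{MOV}}}{R_s}
      + \left(i_{\mathrm{dc}0}-\frac{U_{\mathrm{dc}}-U_{\mathrm{MOV}}}{R_s}\right)e^{-R_s t/L},\\
  \Delta t &= \frac{L}{R_s}\,
      \ln\!\left(1 + \frac{R_s\, i_{\mathrm{dc}0}}{U_{\mathrm{MOV}}-U_{\mathrm{dc}}}\right).
\end{aligned}
```

Because U_MOV exceeds U_dc, the current decays to zero after a finite time Δt; under these assumptions Δt grows with L and i_dc0 and shrinks as the series resistance increases, which is consistent with the sensitivity trends reported in the next part of the paper.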
According to the mathematical model of DC breaking, that is, Equation (3), the characteristic parameters that affect the breaking characteristics can be identified. Mathematical Model of Collaborative Configuration When a short-circuit fault occurs, a resistance-type current limiter is connected into the fault current path. Then, the DC circuit breaker starts to break the fault current. After the DC circuit breaker completes the current-commutation, an infinite resistor is connected into the fault current path. The equivalent circuit model for the DC breaking and current limiting is shown in Figure 6, where R is the equivalent resistance of the DC current limiter. Assume that the instant at which the MOV of the DC circuit breaker is inserted into the fault current path is taken as the time origin. According to the mathematical model of the collaborative configuration of the DC circuit breaker and the current limiter, that is, Equation (6), R, L, R_0, K_ref and i_dc0 are the characteristic parameters that affect the breaking characteristics. In the next chapter, we use the univariate change analysis method to analyze the influence of changes of the characteristic parameters on i_dc and u_MOV. 1) Influence of R on breaking characteristics For the mathematical model of the collaborative configuration of the DC circuit breaker and the current limiter, the value of R is set to 1 Ω, 5 Ω, 10 Ω, 15 Ω and 20 Ω respectively, and the other characteristic parameters are U_dc = 10 kV, L = 5 mH and i_dc0 = 5 kA; the characteristic curves of i_dc and u_MOV as R varies are shown in Figure 8. It can be seen from the figure that Δt decreases with the increase of R, and that the value of u_MOV,max does not depend on R. 2) Influence of L on breaking characteristics For the mathematical model of the collaborative configuration of the DC circuit breaker and the current limiter, the value of L is set to 2 mH, 4 mH, 6 mH, 8 mH and 10 mH respectively, with the other characteristic parameters held as above (U_dc = 10 kV); the characteristic curves of i_dc and u_MOV as L varies are shown in Figure 9. The characteristic curves of i_dc and u_MOV as R_0 varies are shown in Figure 10. It can be seen that Δt decreases as R_0 increases, and that the value of u_MOV,max increases as R_0 increases, which is different from the influence of R on u_MOV,max. The characteristic curves of i_dc and u_MOV as i_dc0 varies are shown in Figure 12. It can be seen from the figure that i_dc increases as i_dc0 increases, and that the value of u_MOV,max increases as i_dc0 increases. In summary, the fault characteristics of the system are greatly affected by the fault current (i_dc0) and the MOV parameters (R_0, K_ref). Principle of Parameter Collaborative Configuration Based on the sensitivity analysis of the characteristic parameters of the collaborative cooperation between the DC circuit breaker and the DC current limiter, the parameter configuration is summarized in Table 1, where "↑" stands for an increase, "↓" stands for a decrease and "→" stands for no change. It can be seen from the table that, from the standpoint of limiting Δt, the larger the equivalent resistance R of the DC current limiter, the shorter the time Δt; however, the larger the R, the higher the manufacturing cost and the larger the volume will be. Conclusions This paper studies the collaborative configuration of the DC circuit breaker and the DC current limiter for the DC distribution system. 1) The fault current suppression mechanism of the DC power distribution system is revealed. The current limiting methods are mainly divided into resistive current limiting and inductive current limiting.
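A univariate sweep of the kind just described can be prototyped in a few lines. The Python sketch below reuses the simplified constant-clamp loop model (the closed-form Δt given earlier); all numerical values, including the 15 kV clamping level, are assumed for illustration and are not the paper's parameters. It prints how the zero-crossing time Δt responds when one parameter is varied while the others are held at a baseline.

```python
import math

def delta_t(L, R_s, i_dc0, u_dc=10e3, u_clamp=15e3):
    """Zero-crossing time of the simplified series R-L-clamp loop.
    L in H, R_s in ohm, i_dc0 in A, voltages in V; all defaults are assumed values."""
    if R_s == 0.0:                       # limiting case of the closed-form expression
        return L * i_dc0 / (u_clamp - u_dc)
    return (L / R_s) * math.log(1.0 + R_s * i_dc0 / (u_clamp - u_dc))

base = dict(L=5e-3, R_s=1.0, i_dc0=5e3)              # baseline point (assumed)
sweeps = [("R_s", [1, 5, 10, 15, 20]),               # ohm
          ("L", [2e-3, 4e-3, 6e-3, 8e-3, 10e-3]),    # H
          ("i_dc0", [2e3, 4e3, 6e3, 8e3, 10e3])]     # A

for name, values in sweeps:
    print(f"-- univariate sweep of {name} --")
    for v in values:
        params = dict(base, **{name: v})
        print(f"   {name} = {v:g}: delta_t = {delta_t(**params) * 1e3:.3f} ms")
```

In this toy model Δt shortens as the series resistance grows and lengthens with L and i_dc0, matching the qualitative trends in Table 1; the dependence of u_MOV,max on the MOV parameters is not captured by the constant-clamp simplification.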
The current breaking methods are mainly divided into inserting an infinite resistance into the fault current path and inserting a capacitor into the fault current path. 2) Based on the mathematical model of the DC circuit breaker, the characteristic parameters of DC breaking are extracted, namely L, R_0, K_ref and i_dc0. The univariate variation method is then used to study the influence of changes of the characteristic parameters on the breaking characteristics Δt and u_MOV,max. 3) The mathematical model of the coordination process between the DC circuit breaker and the DC current limiter is established, and the characteristic parameters of the fault current collaborative suppression are extracted, namely R, L, R_0, K_ref and i_dc0. The influence of the characteristic parameters on the fault current suppression and breaking is obtained. The principle of parameter collaborative configuration of the DC circuit breaker and the DC current limiter is proposed. Through this research, the collaborative suppression ability of the DC circuit breaker and the DC current limiter with respect to the fault current is fully explored to ensure the safe and reliable operation of the DC power distribution system.
3,465
2019-09-17T00:00:00.000
[ "Physics", "Engineering" ]
One year VARC-2-defined clinical outcomes after transcatheter aortic valve implantation with the SAPIEN 3 Aims To evaluate 1-year outcome after transcatheter aortic valve implantation (TAVI) using the SAPIEN 3 (S3) prosthesis with emphasis on the composite endpoints “clinical efficacy after 30 days” and “time-related valve safety” proposed by the updated Valve Academic Research Consortium (VARC-2). Methods and results Four hundred and two consecutive patients undergoing transfemoral TAVI with the S3 were enrolled. Mean age was 81 ± 6 years, 43% were female and median logistic EuroSCORE I was 12% [8–19]. Device success was achieved in 93% (374/402) with moderate or severe paravalvular leakage (PVL) in 2%. At 1 year all-cause mortality was 8.9% [95% CI 6.4–12.2] and new permanent pacemaker implantation rate was 16% [95% CI 12.7–20.4]. The composite endpoint time-related valve safety occurred in 29% with structural valve deterioration, defined as elevated gradients or more than moderate PVL, occurring in 13%. The clinical efficacy endpoint after 30 days was observed in 37% of patients with the main contributor symptom worsening with New York Heart Association functional class III + in 17% of cases. Conclusions For the first time, VARC-2-defined composite endpoints at 1 year are reported and reveal a considerable proportion of patients experiencing the endpoint of time-related valve safety (29%) and clinical efficacy after 30 days (37%). Electronic supplementary material The online version of this article (10.1007/s00392-019-01461-7) contains supplementary material, which is available to authorized users. Introduction Transcatheter aortic valve implantation (TAVI) has revolutionized the treatment of symptomatic severe aortic stenosis in patients at intermediate or high risk for conventional surgical aortic valve replacement [1,2]. With increasing operator experience, improved patient selection, but also continuous evolution of transcatheter heart valves (THV) and refinement of delivery systems, a considerable improvement in outcome has been achieved, with a reduction in 1-year mortality from 24% with older generation THV [3] to 12% with newer generations [2]. In the case of the latest generation balloon-expandable THV, the SAPIEN 3 (S3, Edwards Lifescience, Irvine, Ca), initial results from single centres have been promising [4,5]. Early clinical results of the Placement of Aortic Transcatheter Valves (PARTNER) II SAPIEN 3 trial have shown low 30-day mortality and low rates of stroke or paravalvular leakage (PVL) with the S3-THV [6]. Recently, longer follow-up of the PARTNER trial and the SOURCE 3 registry have become available and have confirmed excellent clinical outcome up to 1 year [7,15]. However, the available 1-year data on this THV is limited by the fact that no study has evaluated outcomes according to the updated definitions proposed by the Valve Academic Research Consortium (VARC-2) [9]. In these, important long-term composite endpoints regarding clinical efficacy and valve safety have been proposed. Therefore, we report 1-year outcome of a large cohort of patients treated with the S3-THV at a single centre using VARC-2 criteria and for the first time report the composite endpoints at 1 year.
Patient population All patients undergoing transfemoral TAVI for severe native aortic valve stenosis with the S3-THV between January 2014 until November 2015 at the Department of Cardiology, Deutsches Herzzentrum München, Munich, Germany were included in the present analysis (n = 402). A multidisciplinary heart team assessed all cases taking into account the calculated perioperative risk scores as well as the patients' characteristics at the bedside and consensus regarding the therapeutic strategy was achieved. Written informed consent was obtained prior to procedure for all patients. The 30-day outcome of a subset of the present population has been published previously [4] and for the present analysis follow-up was extended and more patients were included. Echocardiography and multislice computed tomography (MSCT) data analysis MSCT was performed as part of the standard pre-procedural screening protocol. Aortic annulus measurements were assessed in multiple plane reconstructions as previously described [10]. Transthoracic echocardiography was performed before TAVI, before discharge and during follow-up at 30 days and 1 year. Data at discharge, 30 days and 1 year were available for 98.3%, 91.5% and 91.8% of surviving patients, respectively. Prosthesis size selection and procedure The technical features of the S3-THV have been described elsewhere [11]. At the time of the study, the S3-THV was available in 23, 26, and 29 mm sizes. The final decision on implanted prosthesis size was left at the discretion of the physicians performing the procedure based on MSCT measurements, calcification and annulus eccentricity. Postdilatation was performed in case of PVL II + or in case of prosthesis underexpansion. Definition of endpoints and follow-up All data up to 1 year were prospectively collected during routine ambulatory visits at our outpatients' clinic, by referring to the treating physician or other hospital documentation. Clinical endpoints were categorized using VARC-2 criteria [9]. In brief, device success was defined as absence of procedural mortality and correct positioning of a single prosthetic heart valve into the proper anatomical location and intended performance. The composite endpoint early safety at 30 days [all-cause mortality, stroke (disabling and non-disabling), life-threatening bleeding, acute kidney injury (RIFLE Stage 2 or 3 or renal replacement therapy), coronary artery obstruction requiring intervention, major vascular complication, valve-related dysfunction requiring repeat procedure] was evaluated. "Time-related valve safety" is composed of structural valve deterioration, prosthetic valve endocarditis or thrombosis, stroke and bleeding. "Clinical efficacy after 30 days" consists of all-cause mortality, disabling or non-disabling stroke, or hospitalizations for valve-related symptoms or worsening congestive heart failure (CHF). Additionally, two composite endpoints, death or readmission for heart failure and death or stroke were analyzed. Follow-up at 1 year was complete for 97.5% (392/402) and patients were censored at last event free contact. Statistical analysis Continuous variables are expressed as mean with the standard deviation or the median with the interquartile range. The VARC-2 composite endpoint was assessed as time-to-event rates as were each single contributor of the composite endpoint. 
Additionally, to allow for assessment of possible temporal changes in categories of New York Heart Association (NYHA) functional class, transvalvular gradients and PVL during follow-up, river plots were employed. Event rates were calculated as Kaplan-Meier estimates with the respective 95% confidence intervals. A two-sided p value of < 0.05 was considered statistically significant for all analyses. R (version 3.3.2) was used for all analyses. Clinical outcomes during 1 year after TAVI All-cause mortality at 30 days was 0.8% and increased to 8.9% at 1 year (Table 3; Fig. 1). At 30 days and 1 year, rate of readmission for CHF was 2.5% and 12.0%, respectively. The 1-year composite of all-cause death or readmission for CHF was 18% (Fig. 1a). Cumulative stroke rate at 1 year was 5% with 2% occurring within the first 30 days. The 1-year composite of all-cause death or stroke was 12% (Fig. 1b). The cumulative incidence of permanent pacemaker implantations (PPI) in pacemaker-naive patients was 12.8% at 30 days and increased to 16.2% at 1 year. Figure 2 shows the river plot of changes in NYHA categories. Overall, 52% and 55% of the patients were asymptomatic (NYHA I) at 30 days and 1 year, respectively. Within 1 year, 73% of patients improved at least in one functional class, 13% experienced no change and only 3% worsened. In 11.7% of cases, NYHA class at 1 year was not available due to death (8.5%) or was missing (3.2%). Echocardiographic follow-up Mean transaortic gradients before TAVI, at discharge and during follow-up are displayed in Online Resource 1, showing stable mean gradients around 12 mmHg. The proportion of patients with elevated gradients (≥ 20 mmHg) and moderate PVL and its course during follow-up is depicted in Fig. 3. Of patients with complete echocardiography at discharge and 30 days or with known mortality status (n = 364), PVL was moderate in 2% at discharge and 1% at 1 year. There was no patient with severe PVL (Fig. 3a). The proportion of patients with elevated gradients was 3.3%, 2.7% and 9% at discharge, 30 days and 1 year, respectively. Patients with elevated gradients at discharge had significantly smaller aortic annuli compared to those without elevated gradients (3.7 ± 0.5 vs. 4.8 ± 0.9 cm 2 ; p < 0.001), were more often female (84.6% vs. 41.6%; p = 0.002) and were all treated with the 23 mm prosthesis. Figure 3b shows a considerable increase in elevated gradients from 30 days to 1 year. VARC-2-defined composite endpoints The combined early safety endpoint at 30 days occurred in 13.7%. During the first year after TAVI, 29.4% experienced the time-related valve safety endpoint (Table 3). Figure 4a shows the individual contributors to this endpoint, the main contributor being structural valve deterioration, defined as elevated gradients (≥ 20 mmHg) or PVL II + with a cumulative incidence of 12.9% at 1 year. The clinical efficacy endpoint after 30 days was observed in 37.2% (Fig. 4b). The main contributor of this composite endpoint was symptom worsening (NYHA III/IV) with a cumulative incidence of 17.2%. Discussion In a contemporary population of TAVI patients who were treated in a single centre with the S3-THV, we found excellent results for 1-year mortality. For the first time, we report VARC-2-defined composite endpoints at 1 year, namely "clinical efficacy after 30 days" and "time-related valve safety". 
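As a concrete illustration of how such Kaplan-Meier event rates are obtained, the short Python sketch below hand-rolls the product-limit estimator on a small synthetic cohort; the follow-up times and event flags are invented for the example, and the study itself performed these analyses in R 3.3.2.

```python
# Minimal Kaplan-Meier (product-limit) estimator on synthetic data (illustration only).
from collections import Counter

# (follow-up time in days, 1 = event such as death, 0 = censored) -- invented values
follow_up = [(30, 0), (90, 1), (120, 1), (180, 0), (200, 1),
             (250, 0), (365, 0), (365, 0), (365, 0), (365, 0)]

def kaplan_meier(data):
    """Return a list of (time, survival probability) at each distinct event time."""
    deaths = Counter(t for t, event in data if event == 1)
    surv, curve = 1.0, []
    for t in sorted(deaths):
        at_risk = sum(1 for ti, _ in data if ti >= t)   # still under observation at t
        surv *= 1.0 - deaths[t] / at_risk               # product-limit update
        curve.append((t, surv))
    return curve

for t, s in kaplan_meier(follow_up):
    print(f"day {t:3d}: survival {s:.3f}, cumulative event rate {1 - s:.3f}")
```

The cumulative event rate read off at the latest time point corresponds to the 1-year Kaplan-Meier estimates quoted in the text; confidence intervals would additionally require Greenwood's variance formula.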
The VARC endpoint definitions - 1 year on the SAPIEN 3 transcatheter heart valve The updated Valve Academic Research Consortium criteria provide a standardized framework for evaluation and comparison of clinical outcomes after TAVI [9]. Although the adoption of VARC criteria has been increasing over time, a considerable number of publications does not report outcomes according to VARC [12,13]. Indeed, even in recent pivotal TAVI trials, while applying VARC-2 criteria for reporting in-hospital outcomes, important composite endpoints such as device success and early safety at 30 days are not reported [6][7][8]. The S3-THV is widely used; however, relatively little data on 1-year results are available and no data at all are available on the VARC-2 composite endpoints. The majority of the data comes from the PARTNER II trial [6], in which 952 patients treated transfemorally from the intermediate-risk population presented an all-cause mortality of 12.3% and a combined rate of all-cause mortality and stroke of 17.2% at 1 year [14]. Recently, 1-year data from the SOURCE 3 and the Israeli TAVR registries showed even lower all-cause mortality rates at 1 year, ranging from 8.5% to 12.6%, and a stroke rate of 3.1% [15,16]. A recent subgroup analysis of the SOURCE 3 registry showed mortality rates of 9.3% in patients aged 75-80 years [17], while Eichler et al. [18] presented all-cause mortality rates of 13.8% at 1 year. Our results from 402 patients compare favourably with these recent 1-year data, with an all-cause mortality of 8.9% and a stroke rate of 5.0%. Very recently, results from randomized trials in a low-risk TAVI population have been published in the New England Journal of Medicine, showing even lower 1-year mortality rates ranging from 1% to 2.4% [19,20]. These promising results further strengthen the positive results of TAVI and encourage a further expansion to younger and low-risk populations. VARC-2 composite endpoints The composite endpoint "device success" is an important measure of acute procedural success and few studies have assessed it using the S3-THV. Our group has previously published 30-day outcomes using the S3-THV [4]. In this extended analysis with 1-year follow-up and a significant increase in sample size, we were able to show stable rates of device success (93% vs. 97.6%) and early safety at 30 days (13.7% vs. 10%). As far as clinical efficacy after 30 days and time-related valve safety are concerned, little data are available in the current literature and with other THV [21]. In the present study, we found a relatively high incidence of these endpoints, mostly driven by symptomatic heart failure (NYHA class III/IV) or valve-related dysfunction with elevated gradients. Frequently, clinical conclusions are drawn from comparison of summary data, from which it is almost impossible to follow the development of certain parameters. Here, we created river plots for NYHA class and echocardiographic parameters to better understand the effects of TAVI on both the individual and the population. Using a river plot-based analysis, we observed a dynamic change in elevated gradients, calling into question the clinical significance of this finding. Although the mean of mean pressure gradients was low throughout the first year (12 mmHg), a considerable proportion of patients (9%) exhibited elevated gradients at 1 year. Other groups have reported even higher rates of patient-prosthesis mismatch (24%), mostly due to elevated gradients with the S3-THV [22].
In this analysis, patients experiencing elevated gradients displayed no significant difference in outcome in terms of mortality, stroke rates or worsening of symptoms. A previous study on surgical aortic valve replacement suggested higher rates of re-intervention in patients with elevated gradients, especially in younger patients [23]. Moving towards a younger TAVI population, assessing the impact of elevated gradients on valve durability is of the utmost importance and future studies in large populations with extended follow-up are warranted to fully assess the significance of this finding. Recently, it has become evident that not only valve deterioration but also valve thrombosis does contribute to elevated gradients [24]. Subclinical leaflet thrombosis, a phenomenon relatively recently recognized in the field of TAVI [25], may be a possible explanation for the considerable dynamic in the rate of elevated gradients. In the current analysis, we detected only three cases of valve thrombosis; however, this population treated from 2014 to 2015 was not routinely screened for valve thrombosis with serial examinations by CT or transesophageal echocardiography. Hence, the incidence of valve thrombosis may be underestimated and cannot be excluded as temporary or longer lasting cause of elevated gradients. New permanent pacemaker implantations Cardiac conduction disturbances leading to PPI are a frequent and important complication after TAVI. Although earlier investigations found no negative effect of new PPI on outcome [26], recent data have identified chronic pacing as independent predictor of 1-year mortality after TAVI and as an important cause of prolonged hospital stay [27]. In the case of the S3-THV, first systematic data on PPI showed incidences of 13% until up to 25.5% [28][29][30]. This led to several investigations examining in more detail the potential underlying mechanisms and demonstrated PPI rates of 11.6% [14], 13.1% [6], and 16% [10] at 30 days in pacemaker-naive patients. Multiple factors have been described to predict PPI following TAVI, especially a previous right bundle branch block [31][32][33]. In an extended meta-analysis of PPI following TAVI, Siontis et al. [34] categorized these factors into patient-related, electrocardiographic and procedural factors. While the former two categories cannot be influenced by the operator's choices or skills, device-related factors may be influenced by sizing strategies, implantation technique and implantation depth. Development of novel devices should particularly address these "modifiable" features to allow for less need of PPI after TAVI. Limitations This is an observational study from a single centre without centre-independent adjudication of postprocedural results and lack of independent echocardiographic core lab assessment. Clinical benefit was assessed by NYHA functional class and may be patients' subjective perception. Conclusions The present study assesses 1-year outcomes with the S3-THV according to VARC-2-defined endpoints with low rates of mortality and stroke at 1 year. For the first time, VARC-2-defined composite endpoints at 1 year are reported and reveal a considerable proportion of patients experiencing the composite endpoint of time-related valve safety (29%) and clinical efficacy after 30 days (37%). The main contributor to these combined endpoints was elevated gradients. Further research is warranted to reveal the underlying mechanisms behind this observation.
3,624.6
2019-05-02T00:00:00.000
[ "Medicine", "Engineering" ]
Hospitalized Patients with Medically Unexplained Physical Symptoms: Clinical Context and Economic Costs of Healthcare Management Medically Unexplained Physical Symptoms (MUPS) are physical symptoms without a medical explanation. This study collected data from hospitalized patients presenting MUPS, aiming to draw a clinical and socio-demographic profile of patients with MUPS, to explore psychopathological correlations of Somatic Symptoms Disorder (SSD) diagnosis, and to estimate economic costs related to hospital management for MUPS. The cross-sectional study consisted in the evaluation of data referring to hospitalized patients admitted between 2008 and 2018 in a teaching hospital in Northern Italy. A total of 273 patients presenting MUPS have been hospitalized. The sample showed a prevalence of female, married and employed patients. The most frequent wards involved are Neurology, Internal Medicine and Short Unit Stay. The most common symptoms found are headache, pain, syncope and vertigo. There is no evidence that a history of medical disease is associated with a diagnosis of SSD. A personality disorder diagnosis in patients with MUPS was associated with increased probability of having a diagnosis of SSD. A marginally significant positive association emerged with anxiety disorders, but not with depressive disorder. The overall estimated cost of hospitalization for patients with MUPS is 475′409.73 €. The study provides the investigation of a large number of patients with MUPS and a financial estimate of related hospitalization costs. Introduction Medically unexplained physical symptoms (MUPS) are physical symptoms without a medical explanation. This definition is used to imply somatic symptoms that cannot or have not been sufficiently explained by organic cause after a thorough physical, laboratory and instrumental examination [1]. The persistence of distressing physical symptoms is linked to a huge individual and societal burden and unmet clinical need [2]. MUPS are related with high levels of psychological distress and can lead to an important functional impairment, interfering with work productivity and daily functioning. An association with a high utilization of the healthcare resources and elevated costs are shown in the professional literature [3]. In the official classifications of the Diagnostic and Statistical Manual of Mental Disorder IV-text revision (DSM IV-TR) and International Classification of Diseases 10 th revision (ICD-10), the presence of medically unexplained symptoms was a criterion to fulfill the diagnosis of somatoform disorder. This diagnosis was introduced for the first time in DSM-III [4] and in ICD-10 [5], to try to create a new group that was useful to collect all physical symptoms in which no organic cause was demonstrable. In DSM-5 [6], the nature of the physical symptoms is no longer a criterion for somatoform disorders. In fact, DSM-5 focuses on the way a patient emotionally, cognitively and behaviorally copes with the physical symptoms. According to the Somatic Symptoms Disorder (SSD) classification, even if a patient is suffering from chronic medical conditions, they can also be diagnosed with SSD and receive treatment [2]. The previous classifications were considered difficult to use in clinical practice, especially among general practitioners and non-specialists, because of their rigid categories [7]. On the other hand, in DSM-5, the somatic symptom and related disorders chapter has a limited clinical utility and presents some ambiguity [8][9][10]. 
This diagnostic classification reduces the importance of medically unexplained symptoms and emphasizes the psychological criteria and the functional impairment experimented by the patient. Furthermore, in epidemiological studies, those which were based on DSM criteria for somatoform disorder resulted in low prevalence of this disease, differently from what we observe in the clinical practice [11,12]. In the opinion of many authors, this gap is due to the fact that the diagnostic criteria do not correspond to reality [13]. In 2004 [14], a systematic review of all epidemiologic studies collected 47 papers in the general population and general medicine. It is interesting to note that using standard criteria for somatization disorder, the mean prevalence was 0.4% in the general population and using reduced criteria, such as Somatic Symptom Index (SSI) [11], the results ranged from 4.4% to 19%. It is also interesting to note that in the prevalence studies there is a wide range of prevalence, which often depends on the sample analyzed, for example, in a Dutch study published in 2004, the prevalence of somatoform disorders in general practice was 16.1% [15]. If MUPS are considered not as a feature of a specific disorder but as a health problem itself, a high prevalence of these problems can be noted. Up to one-third of all people presenting with physical symptoms have MUPS [16], but also within the MUPS category, the studies showed wide heterogeneity in terms of the prevalence rates [17]. MUPS are frequently associated with the female gender [18,19] and low socio-economic status [20]. The mean age in which MUPS are more frequent varies between different studies [21]. MUPS are often associated with psychiatric disorders, with a considerable degree of diagnostic overlap with depression, anxiety and panic disorder and substance abuse [3], nevertheless these patients are seen by a psychiatrist very late in their history of disease. MUPS are the most commonly found symptoms in primary care and they often occur even in organic pathology [3,22]. They also have a high prevalence across secondary care settings and they are responsible for a huge proportion of disability and decreased quality of life among the general population [23]. These patients represent an important clinical phenomenon with considerable direct and indirect economic consequences. In the USA and in the UK, several studies have attempted to calculate either the aggregate or individual cost of conditions associated with somatization, highlighting different estimates [23,24]. Previous studies on somatic symptoms disorder support the evidence for an unfavorable outcome of conditions involving persistent functional somatic symptoms, but these studies are mainly based on self-report questionnaires and/or less well-defined diagnostic constructs [25]. As far as we know, there are few studies on medically unexplained symptoms in patients admitted to hospital in the scientific literature. Moreover, correlations of somatic symptoms and associations with clinical variables are often unclear and must be discussed. 
Thus, the present study provided for the collection of data from hospitalized patients presenting medically unexplained physical symptoms (MUPS) referring to different hospital wards, aiming at the following outcomes: (1) to draw a clinical and socio-demographic profile of hospitalized patients with MUPS; (2) to explore psychopathological correlations of SSD diagnosis; (3) to estimate economic costs related to healthcare utilization of MUPS. Materials and Methods The cross-sectional study consisted of the evaluation of data referring to all hospitalized patients admitted between 2008 and 2018 in the wards of a teaching hospital in Northern Italy (Deliberate n. VIII/4221, 28 February 2007). The research involved the Internal Medicine, Neurology, Infectious Disease, Orthopedics, Otorhinolaryngology and Emergency wards; Short Stay Unit data were available from 2014, Emergency and Transplant Surgery data from 2015 and Psychiatry data from 2012. Data from the Short Stay Unit and Emergency and Transplant Surgery were available from the year these wards were opened. Data from the Psychiatry ward were computerized from 2012. Emergency ward data collected referred to the period from November 2017 to November 2018. All data were recruited between January 2018 and January 2019. Hospital discharge letters were analyzed by three psychiatry section clinicians from the hospital software. The clinicians were not directly involved in analyzed patients' diagnosis and treatment. Data from patients fulfilled the following inclusion criteria: age > 18; be an inpatient in the teaching hospital; present symptoms with apparently no medical cause, or whose cause remains unclear (Medically Unexplained Physical Symptoms); have a diagnosis of Somatoform Disorder or Somatic Symptoms Disorder and related disorders by non-specialists (according to DSM-IV-TR and DSM-5; since Italian statistical medical recording is ICD, diagnoses have been made through the ICD code conversion Table); present all test clear. No excluding criteria were used. The following socio-demographic and clinical variables were evaluated: gender, age, marital status, employment, diagnosis or diagnostic hypothesis in admission and discharge, personal medical history, presence of previous or concurrent psychiatric comorbidities, length of hospitalization, healthcare costs, medical examinations, psychiatric evaluation, pharmacological treatment. The economic costs of each hospitalization were obtained from the economic value sheet combined with the discharge letter uploaded on the electronic register of the hospital. When unavailable, the average costs of hospitalization for each patient were estimated by the Management control division of the hospital. The costs of laboratory and instrumental examinations were found on the document "Nomenclature tariff of the specialist outcare patient" (Ministerial Decree 216, 12 January 2017) DPCM 2017) of the Italian National Health System. All patients provided a general written informed consent to the processing of personal data as part of the routine quality check processes. Patients' data were made anonymous, obscuring sensitive information used in the research to protect the recognizability of the patients, according to the Italian legislation (D.L. 196/2003, art. 110-24 July 2008, art. 13). 
The Provincial Health Ethical Review Board (Ethics Committee of Insubrias-Varese, Italy) was consulted prior to the beginning of the study; it confirmed that, as the research was a cross-sectional retrospective study, it did not need authorization from the Board. The study was carried out in accordance with the ethical principles of the Declaration of Helsinki (with amendments) and Good Clinical Practice. To summarize epidemiological and clinical characteristics, descriptive statistics (including means, standard deviations and percentages of demographic variables) were computed. To better detect the clinical and socio-demographic characteristics of the patients, hospital wards were grouped into different macro-areas: Medical wards: Internal Medicine, Neurology, Infectious Disease, Short Stay Unit; Surgical Wards: Emergency and Transplant Surgery, Orthopedics; Emergency Ward; Psychiatry; Otorhinolaryngology. Statistical analyses were performed on data from medical specialties, including surgical wards, Psychiatry and Audiovestibology. Emergency ward data were not computed because of the lack of patients' personal information. Analyses were conducted to investigate specific issues regarding the probability of having a diagnosis of somatic symptoms disorder in our sample of patients with MUPS. In particular, chi-square tests (χ²) were used to investigate whether there were differences in the distribution of the diagnosis of somatic symptoms disorder in the two genders, as well as in the diverse conditions of civil status and employment. Two multiple logistic regression models were used to evaluate whether a series of medical and psychiatric conditions were associated with an increased probability of having a somatic symptoms disorder diagnosis. In particular, we tested a model with medical diseases as independent variables (including previous medical history, neurological anamnesis, fibromyalgia, neoplasms, metabolic diseases, autoimmune diseases, endocrinological diseases, infective diseases, medical diseases, surgery, and accidents), and a second model with psychiatric disorders as independent variables (Depressive Disorder, Anxiety Disorder, Personality Disorder). In both models, all independent variables were dichotomous categorical variables, with a value of 0 indicating no pathology in anamnesis, and a value of 1 indicating the presence of pathology. All analyses were conducted with the software IBM® SPSS® Statistics version 25.0 (IBM Corp., Armonk, NY, USA) [26]. Socio-Demographics and Clinics Socio-demographic and clinical characteristics of the sample are shown in Table 1. The overall number of hospitalizations that were detected was 306. We calculated the total number of patients with MUPS considering that three patients had more than one hospitalization in the research period. The distribution of patients in different wards is shown in Table 2. The prevalence of patients with MUPS is shown in the same table, considering that the percentage of people hospitalized more than once was under 10%. The average length of hospitalization in the different wards was the following: Medical Wards (7 days); Surgical Wards (5 days); Psychiatry (8 days); Audiovestibology (7 days). As shown in Table 3, 46% of the patients in the sample (n = 126) presented no psychopathological comorbidities, of whom 65.8% (n = 83) were women and 34.1% (n = 43) were men.
The diagnosis of somatoform disorder was formulated in 7.9% of cases; in 5% of cases the diagnosis was in comorbidity with other psychiatric disorders and in 2.9% of cases it was without comorbidities. A psychiatric consultation was requested in 75 admissions and a psychopharmacological treatment was set in 157 cases; in 52 cases, the therapy was prescribed by a psychiatrist. Not including the number of hospitalizations in psychiatry, 138 (50.5%) patients did not receive any psychiatric treatment. The pharmacological treatment consisted of benzodiazepines (10.5%) and Selective Serotonin Reuptake Inhibitors (9.5%); in 30.4% of cases, the treatment consisted of combinations of different classes of drugs. A total of 6291 admissions to the Emergency Ward were observed in patients with MUPS; this sample is composed of 5735 subjects, of whom 55% are women (n = 3142) and 45% are men (n = 2590). The average age of the sample is 52 years. A total of 6005 patients were discharged, 20 patients were sent to the outpatient clinic, 243 patients left the emergency ward before concluding the exams, 30 patients refused hospitalization, and two patients were transferred to another hospital. The most frequent symptoms that determined admission were the following: abdominal pain (18.9%; n = 1191); non-specific chest pain (18.7%; n = 1175); lower back pain (12.3%; n = 775); headache (9%; n = 571). Evolution of the Diagnostic Criteria from Somatoform Disorder (DSM-IV-TR) to SSD (DSM-5) A total of 32 patients (19 women and 13 men) of the total sample who did not receive a diagnosis of somatoform disorder fulfilled the diagnostic criteria of the DSM-5 Somatic Symptom Disorder, based on the discharge letter. A total of 6 patients had a psychiatric consultation during hospitalization. A total of 16 patients had a previous psychiatric diagnosis (Anxiety Disorder n = 10; Depressive Disorder n = 4; Substance Abuse n = 1; Anxiety Disorder/Eating Disorder n = 1), seven patients received a psychiatric diagnosis upon discharge (Anxiety Disorder n = 4; Depressive Disorder n = 2; Personality Disorder n = 1) and nine patients had no previous psychiatric diagnosis and did not receive a psychiatric diagnosis upon discharge. Logistic regression models are presented in Table 5. The table includes Odds Ratios (OR), indicating the increase in the probability of occurrence of the SSD diagnosis, and their corresponding Confidence Intervals (CI) and p-values. CIs including the value of 1 indicate no significant relationship. Standard Errors (SE) associated with the coefficients and the Wald χ² are also reported. The Wald χ² tests the null hypothesis that there is no association: if significant, the probability of occurrence of the SSD diagnosis is significantly associated with the corresponding predictor. As can be seen, the model including medical diagnoses as independent variables indicated that the presence of a neurological disease in the medical history was negatively associated with the presence of a diagnosis of somatic symptom disorder (OR = 0.34; Wald χ²(1) = 4.75, p = 0.03). However, it has to be noted that the overall model was not significant (χ²(11) = 17.96; p = 0.08; Nagelkerke R² = 0.13), meaning that medical diseases did not explain a significant percentage of variance in the dependent variable. Given this, we computed a Phi correlation coefficient between neurological anamnesis and somatic symptom disorder diagnosis to further explore this association: the correlation was negative and significant (φ = −0.13; p = 0.03).
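The odds ratios, Wald χ² statistics and confidence intervals reported above follow directly from the fitted logistic regression coefficients: each OR is the exponential of a coefficient, the Wald χ² is the squared ratio of a coefficient to its standard error, and a CI that still includes 1 after exponentiation indicates no significant association. The Python sketch below shows this standard recipe on synthetic data; the predictor names and all numbers are invented for illustration, and the study itself used SPSS 25 rather than Python's statsmodels.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
# Synthetic dichotomous predictors (0 = absent, 1 = present) -- invented for illustration
depression = rng.integers(0, 2, n)
anxiety = rng.integers(0, 2, n)
personality = rng.integers(0, 2, n)
# Synthetic outcome: an SSD diagnosis made more likely by the personality-disorder flag
linear_predictor = -2.5 + 2.0 * personality + 0.6 * anxiety
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-linear_predictor))).astype(int)

X = sm.add_constant(np.column_stack([depression, anxiety, personality]))
fit = sm.Logit(y, X).fit(disp=0)

names = ["intercept", "depression", "anxiety", "personality"]
odds_ratios = np.exp(fit.params)          # OR = exp(coefficient)
wald_chi2 = (fit.params / fit.bse) ** 2   # Wald statistic, 1 degree of freedom each
ci = np.exp(fit.conf_int())               # 95% CI on the OR scale
for i, name in enumerate(names):
    print(f"{name:12s} OR = {odds_ratios[i]:6.2f}  "
          f"95% CI = ({ci[i, 0]:.2f}, {ci[i, 1]:.2f})  Wald chi2 = {wald_chi2[i]:.2f}")
```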
A logistic regression model including psychiatric diagnoses as independent variables was significant (χ²(3) = 12.16; p < 0.01; Nagelkerke R² = 0.10). The model correctly classified 92.7% of participants, and indicated that a personality disorder diagnosis in patients with MUPS was associated with an increased probability of having a diagnosis of Somatic Symptom Disorder (OR = 16.18; Wald χ²(1) = 8.26, p < 0.01). A marginally significant positive association (p = 0.06) also emerged with anxiety disorder, but not with depressive disorder. Table 6 shows the overall estimated cost of hospitalizations for patients with MUPS and the costs divided by hospital ward. The total amount is 475,409.73 €, with an average cost per year of 47,540.97 €. The highest costs were observed in medical wards, namely Neurology (328,192.09 €) followed by Internal Medicine (147,976.16 €). The overall estimated cost of examinations, which include blood tests and instrumental examinations, is 119,926.34 €. The overall estimated cost of hospitalizations in surgical wards is 14,495.14 €. Discussion The study was carried out in a secondary care setting. The clinical and diagnostic features of somatoform disorder have been debated by authors over the years, without reaching a consensus on which would be the best and most useful diagnostic classification. As in previous studies, MUPS were chosen as the basic diagnostic feature for the initial selection of the patients [17,25,27,28]. MUPS still remain the main feature of all the diagnostic labels proposed (official ones and alternative ones), except for Somatic Symptom Disorder (according to DSM-5, APA 2013). This diagnostic category was introduced in order to change the diagnostic paradigm and facilitate the diagnosis, especially for non-specialists [6,29].
In the literature, SSRIs emerged as the preferred treatment, alone or in combination with antipsychotics [34][35][36]. This result is consistent with what has emerged in the evidence-based literature. In a recent meta-analysis, it emerged that the new generation of antidepressants has only very low-quality evidence regarding its effectiveness, and that this effectiveness must be balanced against high rates of adverse effects [3]. No data are available for benzodiazepines, but German guidelines for somatoform disorder discourage the use of anti-anxiety medications, especially in elderly people [37][38][39]. We could not evaluate the possible efficacy of any type of psychotherapy, which presents some evidence of being effective [40,41], because this information was not available in the patients' discharge letters. Regarding the data on the wards involved in the presentation of MUPS and the most common symptoms presented by the patients, these data differ from the literature, especially concerning Internal Medicine or Primary Care. For example, Kroenke and Mangelsdorff conducted a longitudinal study on the common symptoms in an internal medicine setting, highlighting that the most frequent symptoms were chest pain, fatigue and dizziness [42]. This difference could be due to the large number of neurologic patients in our sample, although if we consider the subgroup of patients referring to the Emergency Ward, lower back pain, non-specific chest pain, headache and abdominal pain formed the most common symptomatology. With regard to the correlation between medical anamnesis and SSD, there is no evidence that a history of medical disease is associated with a diagnosis of SSD. In other words, patients with MUPS and a neurological diagnosis in their medical history may be less likely to receive a somatic symptom disorder diagnosis compared to patients with MUPS and no neurological diagnosis in anamnesis, although further study is necessary to confirm this finding. It is possible that, having already received the diagnostic label of a previous neurological disorder, patients are subsequently not given an appropriate codification of MUPS [43]. From our analyses, a Personality Disorder diagnosis in patients with MUPS was associated with an increased probability of having a diagnosis of Somatic Symptom Disorder. A marginally significant positive association also emerged with Anxiety Disorder, but not with Depressive Disorder. This interesting result highlights the impact of previous diagnoses on formulating a diagnosis of SSD in patients presenting MUPS. Further investigations are needed to understand these psychopathological correlations. From our cost analysis, the Neurology ward had the highest overall healthcare expenditure, including the highest cost for laboratory and instrumental exams. This observation could be due to the type of examinations, which are predominantly procedures associated with high healthcare costs. It is interesting to note that psychiatric hospitalizations incur higher costs than those related to emergency surgery and infectious disease. This could be due to the long hospitalization durations in psychiatry and to the fact that patients in emergency surgery did not receive any surgery after the investigations came back clear. With regard to patients admitted to infectious disease, hospitalizations were shorter than in psychiatry and the medications received were not expensive.
As shown in Table 6, the ratio between the costs for MUPS in hospitalized patients and the overall costs related to hospitalizations in each ward is higher in Neurology (1.9%) than in the other specialties. This is in line with the prevalence of clinical presentation, as already described in the text. As widely described in the literature, this could be used as a guide to reduce repetitive investigations and to evaluate the need for a psychiatric consultation early. In fact, psychiatric consultation has been identified as a way to support and implement the diagnostic process in order to reach an earlier person-centered psychiatric intervention, while also evaluating personal resources [44][45][46][47]. The present study takes into consideration the costs related to only part of the diagnostic process, raising the hypothesis that the total healthcare costs for patients with MUPS are even higher [43]. As shown in the professional literature, this may only be the tip of the iceberg [25], and this is the reason why it was not possible to compare our data with the healthcare costs derived from previous American and European studies in the professional literature [23,25]. As far as we know, few studies on patients with medically unexplained symptoms admitted to hospital exist in the professional literature. The strengths of the present study are the investigation of a large number of patients with MUPS; the study of the clinical and socio-demographic variables and the psychopathological correlations involved in the development of Somatic Symptom Disorder; and the provision of a financial estimate of the hospitalization costs of patients with MUPS. The study presents some limitations, such as the small sample size from non-medical specialties, which, together with the lack of some patients' personal information, limited the possibility of extending the statistical analyses to the whole sample. Further investigations within this research project could extend the study to other areas, such as General Practice, and to outpatient clinics and facilities.
5,374.6
2019-07-01T00:00:00.000
[ "Medicine", "Economics" ]
LHC Run-I constraint on the mass of doubly charged Higgs bosons in the same-sign diboson decay scenario In this Letter, we study the latest bound on the mass of doubly charged Higgs bosons, $H^{\pm\pm}$, assuming that they dominantly decay into a diboson. The new bound is obtained by comparing the inclusive searches for events with a same-sign dilepton by the ATLAS Collaboration using the latest 20.3 fb$^{-1}$ data at the LHC 8 TeV run with theoretical prediction based on the Higgs triplet model with next-to-leading order QCD corrections. We find that the lower mass bound on $H^{\pm\pm}$ is about 84 GeV. I. INTRODUCTION Recently, the ATLAS Collaboration has released new results for the inclusive searches for events with a same-sign dilepton using the 20.3 fb^-1 data at the 8 TeV run of the LHC [1]. They improve the previous results based on the 4.7 fb^-1 data at the 7 TeV run [2]. From the non-observation of any excess over the standard model (SM) background, upper limits at the 95% confidence level (CL) on the fiducial cross section have been obtained for inclusive production of a same-sign dilepton from the non-SM contribution. One of the most interesting applications of these results is to obtain a constraint on the parameter space for physics related to doubly charged Higgs bosons H±±. In various exotic models beyond the SM, the existence of H±± is predicted, e.g., in the left-right symmetric model [3], in models with the type-II seesaw mechanism [4], and in neutrino mass models via quantum effects [5]. In this Letter, we focus on H±± in the Higgs triplet model (HTM) [4], where the Higgs sector is composed of an isospin doublet Higgs field with the hypercharge Y = 1/2 and a triplet field with Y = 1. In the HTM, two decay modes are allowed for H±±, i.e., decays into the same-sign dilepton and the same-sign diboson. If the same-sign dilepton decay is dominant, the most stringent lower limit on the mass of H±± (m_H±±) has been obtained to be about 550 GeV [1] at the LHC. On the other hand, searches for H±± in the same-sign diboson decay mode are of distinct importance. The detection of their interactions with weak gauge bosons can probe whether H±± originates from Higgs fields with a non-trivial isospin charge. When the same-sign diboson decay is dominant, the mass bound given above no longer applies. In our previous publications [7,8], we performed analyses to obtain the mass bound on H±± in the diboson decay scenario in the HTM. By using the results of the inclusive searches for events with a same-sign dilepton at the LHC [2], the obtained mass bound on H±± was m_H±± ≳ 60 GeV [8]. In this Letter, we update our analysis based on the new data in Ref. [1], and revise the mass bound on H±± in the diboson decay scenario. II. ATLAS NEW RESULTS AT THE 8 TEV RUN In Ref. [1], inclusive searches for events with a same-sign dilepton have been performed by the ATLAS Collaboration using the full data set at the 8 TeV run of the LHC. Events which contain a same-sign dilepton have been collected with the selection cuts of (i) pT > 25 GeV for the leading transverse momentum (pT) lepton, (ii) pT > 20 GeV for the sub-leading pT lepton, (iii) |η| < 2.5 for both leptons, where η represents the pseudorapidity, and (iv) an invariant mass cut of Mℓℓ > 15 GeV. To reduce background from Z boson decays, (v) events with an opposite-sign same-flavor dilepton whose invariant mass satisfies |Mℓℓ − mZ| < 10 GeV are rejected.
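The fiducial selection just listed amounts to a simple filter over lepton pairs. The Python sketch below is a toy illustration of cuts (i)-(v) applied to hypothetical reconstructed leptons; the example leptons and their pair masses are invented, and a real analysis would of course run over reconstructed ATLAS objects and apply the veto (v) at the event level.

```python
# Toy illustration of the same-sign dilepton fiducial selection, cuts (i)-(v).
# A lepton is (flavour, charge, pT [GeV], eta); the pair invariant mass M_ll is
# supplied directly. All example values below are invented.
M_Z = 91.19  # Z boson mass [GeV]

def passes_fiducial(lep1, lep2, m_ll):
    lead, sub = (lep1, lep2) if lep1[2] >= lep2[2] else (lep2, lep1)
    if lead[2] <= 25 or sub[2] <= 20:                  # (i), (ii): pT thresholds
        return False
    if abs(lep1[3]) >= 2.5 or abs(lep2[3]) >= 2.5:     # (iii): |eta| < 2.5
        return False
    if m_ll <= 15:                                     # (iv): M_ll > 15 GeV
        return False
    os_same_flavour = lep1[1] != lep2[1] and lep1[0] == lep2[0]
    if os_same_flavour and abs(m_ll - M_Z) < 10:       # (v): Z-window veto
        return False
    return lep1[1] == lep2[1]                          # keep same-sign pairs only

same_sign_mumu = (("mu", +1, 40.0, 0.5), ("mu", +1, 22.0, -1.2), 55.0)
z_like_ee = (("e", +1, 30.0, 0.1), ("e", -1, 28.0, 0.3), 92.0)
for lep1, lep2, m_ll in (same_sign_mumu, z_like_ee):
    print(passes_fiducial(lep1, lep2, m_ll))   # prints True, then False
```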
In addition, in the e±e± channel, (vi) events with a same-sign dielectron in the mass range between 70 GeV and 110 GeV are vetoed, so that events in this region can be used as a control sample to estimate the SM background. The total numbers of collected events and the invariant mass distributions are in good agreement with the SM prediction, and therefore upper limits on the cross section from non-SM contributions are obtained for the fiducial region defined above. III. LIMIT ON H±± IN THE DIBOSON DECAY SCENARIO The experimental limits on the fiducial cross section can be compared with the theoretical prediction calculated as $\sigma_{\rm fid} = \sigma_{\rm tot} \cdot B \cdot \epsilon_A$, where σ_tot · B is (the sum of) the total cross section times branching ratio for the processes giving the same-sign dilepton signal in the new physics model, and ε_A is the efficiency factor of the acceptance and kinematical cuts. We evaluate the fiducial cross section for the process with a same-sign dimuon, µ±µ±, in the final state via H±± → W(*)±W(*)± in the HTM. The other channels, such as e±e± and e±µ±, turn out to give weaker bounds than the µ±µ± channel. In the following discussion, we assume that the branching ratio of the diboson decay mode is 100%. The branching ratio for the H±± → W(*)±W(*)± → µ±µ±νν channel is explained in detail in Ref. [8]. The dominant production processes of H±± at the LHC are (a) pp → H++H−−, (b) pp → H++H−, and (c) pp → H+H−−, where H± are the singly charged Higgs bosons which are also introduced in the HTM. The total cross sections for these processes have been calculated up to next-to-leading order (NLO) in perturbative QCD [10]. Numerical predictions at the LHC for various collision energies can be found in Ref. [8]. We assume that the mass of H± is the same as that of H±± for simplicity. In this Letter, the efficiencies for the acceptance and kinematical cuts are estimated using MadGraph5 [11] for each production process at the parton level at leading order. Because we consider only inclusive production of a pair of same-sign muons and do not count the other particles, the cuts (v) and (vi) explained in the last section are omitted. In Table I, we summarize the total cross sections, the branching ratio and the efficiencies for m_H±± = 50 GeV to 100 GeV. By combining them, the fiducial cross section for inclusive µ±µ± production is calculated as $\sigma_{\rm fid} = \sum_{i=a,b,c} \sigma_i \, B_{\mu\mu} \, \epsilon_i$, where σ_i and ε_i are the total cross sections and efficiencies for the processes (a), (b) and (c), respectively, and B_µµ = B(H±± → µ±µ±νν). The results for the fiducial cross sections are also summarized in Table I. TABLE I: Total cross sections [8] for the H++H−−, H++H− and H+H−− processes, the branching ratio of H±± into a same-sign dimuon [8], and the acceptance and cut efficiencies for µ±µ± searches at the LHC at 8 TeV for m_H±± = 50 GeV to 100 GeV. Efficiencies include the acceptance cuts on p_T and η of the muons and the invariant mass cut M_µµ > 15 GeV. The resulting fiducial cross sections are also listed. Now we are ready to compare the fiducial cross sections for inclusive µ±µ± production via the diboson decay of H±± with the experimental limit at the LHC. In Fig. 1, the predicted fiducial cross sections are shown as a function of m_H±±, evaluated with parton distribution functions [12]. The green dashed horizontal line shows the 95% CL upper limit obtained by the ATLAS Collaboration. By comparing them, we find that doubly charged Higgs bosons with m_H±± ≲ 84 GeV are excluded in the diboson decay scenario.
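The bookkeeping described in Sec. III — combining the per-process cross sections, the dimuon branching ratio and the cut efficiencies into a fiducial cross section and comparing it with the experimental limit — can be illustrated with a short Python sketch. All numbers below are placeholders, not the entries of Table I or the actual ATLAS limit; in practice one would scan m_H±± and read off where the predicted σ_fid crosses the 95% CL line.

```python
# Illustrative sketch (hypothetical inputs): combine per-process cross sections,
# efficiencies and the dimuon branching ratio into a fiducial cross section,
# then compare with an assumed 95% CL upper limit, as described in Sec. III.

# Placeholder values for a single low test mass m_H±±
sigma = {"HppHmm": 6000.0, "HppHm": 3500.0, "HpHmm": 2000.0}  # total NLO cross sections [fb]
eff   = {"HppHmm": 0.30,   "HppHm": 0.25,   "HpHmm": 0.25}    # acceptance x cut efficiencies
B_mumu = 0.012                                                 # B(H±± -> mu± mu± nu nu), assumed

# sigma_fid = sum_i sigma_i * B_mumu * eps_i  (same bookkeeping as in the text)
sigma_fid = sum(sigma[p] * B_mumu * eff[p] for p in sigma)

limit_95 = 16.0  # assumed 95% CL upper limit on the fiducial cross section [fb]

print(f"predicted fiducial cross section: {sigma_fid:.2f} fb")
print("excluded at 95% CL" if sigma_fid > limit_95 else "allowed")
```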
For reference, the experimental limits for the other decay channels are reported as σ_fid^95(e±e±, M_ee > 15 GeV) = 32 fb and σ_fid^95(e±µ±, M_eµ > 15 GeV) = 29 fb [1], while the theoretical estimations for these channels are comparable with the µ±µ± channel in the mass range m_H±± ≲ 90 GeV [8]. Thus, the constraints from these channels are weaker than that from the µ±µ± channel. IV. CONCLUSION We have studied the latest mass bound on doubly charged Higgs bosons in the diboson decay scenario in the HTM. The new limit has been obtained by comparing the inclusive searches for events with a same-sign dilepton by the ATLAS Collaboration using the latest 20.3 fb−1 data set at the LHC 8 TeV run [1] with the theoretical prediction, which includes the production cross sections with NLO QCD corrections, the branching ratio with interference effects, and the efficiencies for the acceptance and kinematical cuts [8]. The lower bound has been revised to m_H±± ≳ 84 GeV.
2,077.6
2014-12-24T00:00:00.000
[ "Physics" ]
Theoretical modeling and experimental demonstration of Raman probe induced spectral dip for realizing a superluminal laser We have demonstrated experimentally a Diode-Pumped Alkali Laser (DPAL) with a Raman resonance induced dip in the center of the gain profile, in order to produce an anomalous dispersion, necessary for making the laser superluminal. Numerical calculations match closely with experimental results, and indicate that the laser is operating superluminally, with the group index far below unity (~0.00526) at the center of the dip. The estimated factor of enhancement in the sensitivity to cavity length perturbation is ~190, approximately equaling the inverse of the group index. This enhancement factor can be made much higher via optimal tuning of parameters. Such a laser has the potential to advance significantly the field of high-precision metrology, with applications such as vibrometry, accelerometry, and rotation sensing. © 2016 Optical Society of America OCIS codes: (020.1670) Coherent optical effects; (120.3180) Interferometry; (140.3370) Laser gyroscopes; (190.5650) Raman effect. References and links 1. M. S. Shahriar, G. S. Pati, R. Tripathi, V. Gopal, M. Messall, and K. Salit, “Ultrahigh enhancement in absolute and relative rotation sensing using fast and slow light,” Phys. Rev. A 75(5), 053807 (2007). 2. D. D. Smith, H. Chang, L. Arissian, and J. C. Diels, “Dispersion-enhanced laser gyroscope,” Phys. Rev. A 78(5), 053824 (2008). 3. G. S. Pati, M. Salit, K. Salit, and M. S. Shahriar, “Demonstration of displacement-measurement-sensitivity proportional to inverse group index of intra-cavity medium in a ring resonator,” Opt. Commun. 281(19), 4931– 4935 (2008). 4. D. D. Smith, K. Myneni, J. A. Odutola, and J. C. Diels, “Enhanced sensitivity of a passive optical cavity by an intracavity dispersive medium,” Phys. Rev. A 80(1), 011809 (2009). 5. D. D. Smith, H. Chang, K. Myneni, and A. T. Rosenberger, “Fast-light enhancement of an optical cavity by polarization mode coupling,” Phys. Rev. A 89(5), 053804 (2014). 6. G. S. Pati, M. Salit, K. Salit, and M. S. Shahriar, “Demonstration of a tunable-bandwidth white-light interferometer using anomalous dispersion in atomic vapor,” Phys. Rev. Lett. 99(13), 133601 (2007). 7. O. Kotlicki, J. Scheuer, and M. S. Shahriar, “Theoretical study on Brillouin fiber laser sensor based on white light cavity,” Opt. Express 20(27), 28234–28248 (2012). 8. K. Myneni, D. D. Smith, H. Chang, and H. A. Luckay, “Temperature sensitivity of the cavity scale factor enhancement for a Gaussian absorption resonance,” Phys. Rev. A 92(5), 053845 (2015). 9. H. N. Yum, M. Salit, J. Yablon, K. Salit, Y. Wang, and M. S. Shahriar, “Superluminal ring laser for hypersensitive sensing,” Opt. Express 18(17), 17658–17665 (2010). 10. G. S. Pati, M. Salit, K. Salit, and M. S. Shahriar, “Simultaneous slow and fast light effects using probe gain and pump depletion via Raman gain in atomic vapor,” Opt. Express 17(11), 8775–8780 (2009). 11. J. Scheuer and S. M. Shahriar, “Lasing dynamics of super and sub luminal lasers,” Opt. Express 23(25), 32350– 32366 (2015). 12. W. F. Krupke, R. J. Beach, V. K. Kanz, and S. A. Payne, “Resonance transition 795-nm rubidium laser,” Opt. Lett. 28(23), 2336–2338 (2003). 13. M. O. Scully, and W. E. Lamb, Laser Physics (Westview, 1974). 14. H. C. Bolton and G. J. Troup, “The modification of the Kronig-Kramers relations under saturation conditions,” Philos. Mag. 19(159), 477–485 (1969). Vol. 24, No. 
15. G. J. Troup and A. Bambini, “The use of the modified Kramers-Kronig relation in the rate equation approach of laser theory,” Phys. Lett. 45(5), 393–394 (1973). 16. H. N. Yum and M. S. Shahriar, “Pump-probe model for the Kramers-Kronig relations in a laser,” J. Opt. 12(10), 104018 (2010). 17. T. Skettrup, T. Meelby, K. Faerch, S. L. Frederiksen, and C. Pedersen, “Triangular laser resonators with astigmatic compensation,” Appl. Opt. 39(24), 4306–4312 (2000). 18. Z. Zhou, J. Yablon, M. Zhou, Y. Wang, A. Heifetz, and M. S. Shahriar, “Modeling and analysis of an ultrastable subluminal laser,” Opt. Commun. 358, 6–19 (2016). 19. M. S. Shahriar, Y. Wang, S. Krishnamurthy, Y. Tu, G. S. Pati, and S. Tseng, “Evolution of an N-level system via automated vectorization of the Liouville equations and application to optically controlled polarization rotation,” J. Mod. Opt. 61(4), 351–367 (2014). 20. D. A. Steck, “Rubidium 85 D Line Data,” available online at http://steck.us/alkalidata (revision 2.1.6, 20 September 2013). 21. D. A. Steck, “Rubidium 87 D Line Data,” available online at http://steck.us/alkalidata (revision 2.0.1, 2 May 2008). 22. Y. Wang, Z. Zhou, J. Yablon, and M. S. Shahriar, “Effect of multi-order harmonics in a double-Raman pumped gain medium for a superluminal laser,” Opt. Eng. 54(5), 057106 (2015). Introduction Optical interferometry is currently the standard technique for making many of the most precise measurements, but there still exist many applications for which even the best interferometers are not sensitive enough, and some applications for which higher sensitivity will always be of interest. Over the last few years, significant effort has been underway towards theoretical understanding and experimental realization of superluminal lasers, with the ultimate goal of metrological sensitivity enhancement [1–11]. It has been shown that the sensitivity of a ring laser with respect to cavity length perturbation is inversely proportional to the group index of the material inside the cavity [3,8]. By definition, the group velocity of a superluminal laser exceeds the velocity of light in vacuum, which means that the group index, n_g, is less than unity. In principle any value of n_g can be achieved with the proper choice of experimental parameters; therefore creating a laser with a group index arbitrarily close to zero over a significant bandwidth is the ultimate goal of superluminally-enhanced laser interferometry. Previously we described a general scheme for realizing a superluminal laser in which the laser gain profile contains a narrow absorption dip at the center [9]. Within the range of this dip, the lasing beam itself experiences anomalous dispersion. Figure 1(a) shows a typical gain profile (−χ″) within this range, while the corresponding variation in the index (χ′) is shown in Fig.
1(b).The slope of the index at the center of the dip is negative, which corresponds to anomalous dispersion and therefore superluminal operation.With properly-tuned parameters, 1 g n − can exceed 10 5 over a significant bandwidth, leading to sensitivity enhancement of more than five orders of magnitude.This is calculated and described in detail in [9].We have demonstrated this scheme by using Raman depletion inside a Diode-Pumped Alkali Laser (DPAL) cavity, as shown in Fig. 2. The process starts by creating a ring laser for which gain is provided by an optically-pumped Rubidium vapor cell containing a highpressure buffer gas [12].A Raman probe is then created by taking a piece of the output beam and frequency-shifting it by an amount that matches the ground-state hyperfine splitting in 85 Rb.Another vapor cell (without buffer gas) is placed inside the cavity and is optically pumped so that population imbalance between the two hyperfine ground states is achieved.This leads to gain in the Raman probe, accompanied by depletion of the intra-cavity beam, which effectively serves as the Raman pump in this process.The goal of this paper is to present our experimental work regarding the realization of such a scheme, and to compare our findings with the numerical model which we have developed.In the next section, we review briefly the analytical description of the superluminal lasing process, which makes use of idealized gain and dip profiles.We then present our design of the superluminal laser, and describe a more comprehensive numerical model based on the density matrices of the atomic systems that provide the DPAL gain and Raman depletion.Finally, we show the experimental results and compare them to predictions made by the numerical model.The correspondence between theory and experiment suggests that our laser is operating in the superluminal regime. Analytical model of a laser with idealized Lorentzian gain and dip To understand and quantify the enhancement effects in a superluminal laser, it is important to review briefly the semi-classical equations of motion for a single-mode laser [13].The phase and amplitude of the electric field in an optical cavity are governed by the following equations: 2 2 where ν is the frequency, and ϕ and E are the phase and amplitude of the lasing field.c Ω is the empty-cavity resonant frequency for a particular longitudinal mode, and Q is the quality factor of the empty cavity.The material susceptibility is characterize the sensitivity of the laser output frequency with respect to cavity length change in a filled cavity and an empty cavity, respectively.The ratio between these two derivatives, ( ) ( ) the factor by which the presence of the intra-cavity medium enhances ( R > 1) or reduces ( R < 1 ) frequency shift sensitivity.Solving Eqs. ( 1) and (2) in the steady state results in: To relate this enhancement factor to the group index, we start with: ( ) In a dilute gas such as Rubidium vapor, the magnitude of R χ is far less than unity, so that only the first two terms of Eq. ( 4) are relevant.The group index g n is the ratio between the vacuum speed of light c and group velocity, and can be expressed as follows: Therefore the enhancement factor R from Eq. 
( 3) is the reciprocal of the group index.To obtain an analytic expression for R , we must first obtain an analytic expression for ( ) We consider a Lorentzian gain medium with a Lorentzian inner dip, both centered at o ν ν = , so that the imaginary part of the susceptibility ( ) I ν χ , which represents gain (negative values) or loss (positive values), can be written as: where the subscripts g and d refer to "gain" and "dip" respectively. G is the strength parameter which depends on the number density of atoms as well as atomic dipole moment and transition linewidth.Applying the Modified Kramers-Kronig (MKK) relations [14][15][16] to Eq. ( 6) results in the following expression for the dispersion: With an analytic expression for ( ) R χ ν , an analytic expression for R is obtainable using Eq. (3). Superluminal laser design The overall design for the superluminal laser is illustrated schematically in Fig. 2(a).The diagram of the energy levels and optical fields in the gain cell is shown in Fig. 2(b).The gain cell contains vapor of naturally-occurring Rubidium (72.16% 85 Rb and 27.84% 87 Rb) mixed with high-pressure Ethane buffer gas, which induces rapid collisional relaxation from the P 3/2 to the P 1/2 manifold.This leads to population inversion and gain on the D 1 line.The magnitude and bandwidth of the gain depend on the rate of collisions between the Rubidium atoms and the buffer gas molecules, so that the gain spectrum can be tuned to some extent by varying the buffer gas pressure. The dip cell contains pure 85 Rb vapor with no buffer gas.The two ground-state hyperfine levels are denoted as 1 and 2 ; however it is important to note that these states themselves are composed of five and seven Zeeman sublevels, respectively.c)] is far less than the thermal energy at room temperature, so that if each Zeeman sublevel is equally-populated, then the composite states 1 and 2 contain 5/12 and 7/12 of the total atomic population, respectively.In order to produce Raman gain and depletion, it is necessary to produce first a population imbalance between two different Zeeman sublevels.This is the purpose of the Raman optical pump [Fig. 2(c)] , which transfers atoms from the Zeeman sublevels in state 1 to the Zeeman sublevels in state 2 . The Raman probe is produced by frequency-shifting the laser output with an acousto-optic modulator (AOM) by approximately 3.0357 GHz, which is the ground-state hyperfine splitting in 85 Rb.Two-photon resonance between the Raman probe and the lasing beam results in amplification of the Raman probe, at the expense of the lasing beam which thus experiences depletion.In general, this Raman gain/depletion process is most efficient when the Raman pump and probe are slightly detuned from resonance, so that single-photon absorption is minimized. The cavity contains a flat output coupler and two curved high reflectors.Cavity length and reflector radii of curvature are specifically chosen so that a stable and astigmatism-free mode exists without the use of an intra-cavity lens [17].To avoid Zeeman splitting, the dip cell is placed inside a mu-metal box.The optical isolator prevents directional mode competition, while the etalon prevents longitudinal mode competition and enables manual tunability of optical path length.Transverse mode competition can be eliminated with the proper choice of optical pump beam radius, or with the insertion of an intra-cavity aperture. 
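As a concrete illustration of the idealized model of Section 2 (a broad Lorentzian gain with a narrow Lorentzian dip, the dispersion related to it by Kramers-Kronig, and the enhancement R ≈ 1/n_g), the following Python sketch evaluates the group index at line centre. It uses the ordinary linear-response Kramers-Kronig partner of a Lorentzian rather than the modified relations of Refs. [14–16], and every parameter value below is an illustrative placeholder rather than an experimental number.

```python
# Minimal numerical sketch of the Section 2 model (all values are placeholders,
# chosen only so that n_g comes out well below unity).
import numpy as np

nu0     = 3.77e14        # optical carrier frequency [Hz] (~795 nm), assumed
Gamma_g = 2.0e9          # gain half-width [Hz], assumed
Gamma_d = 1.0e6          # dip half-width [Hz], assumed
G_gain  = 4.0e-7         # gain strength (dimensionless), assumed
G_dip   = 5.5e-9         # dip strength, chosen so G_dip/Gamma_d > G_gain/Gamma_g

delta = np.linspace(-20e6, 20e6, 400001)   # detuning from line centre [Hz]
nu    = nu0 + delta

def lorentz_pair(G, Gamma, d):
    """Imaginary and real parts of a Lorentzian resonance (strength G, HWHM Gamma)."""
    chi_i = G * Gamma**2 / (d**2 + Gamma**2)
    chi_r = -G * Gamma * d / (d**2 + Gamma**2)   # standard Kramers-Kronig partner
    return chi_i, chi_r

gi, gr = lorentz_pair(-G_gain, Gamma_g, delta)   # gain: negative chi_I
di, dr = lorentz_pair(+G_dip,  Gamma_d, delta)   # dip : positive chi_I
chi_I, chi_R = gi + di, gr + dr

n      = 1.0 + chi_R / 2.0                       # dilute-medium index
dn_dnu = np.gradient(n, nu)
n_g    = n + nu * dn_dnu                         # group index
R      = 1.0 / n_g                               # sensitivity enhancement factor

i0 = np.argmin(np.abs(delta))
print(f"group index at line centre : {n_g[i0]:.3e}")
print(f"enhancement R ~ 1/n_g      : {R[i0]:.3e}")
```

With these placeholder numbers the narrow dip dominates the dispersion slope at line centre, so n_g drops far below unity even though the dip removes only a small fraction of the gain, which is the qualitative behaviour described above.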
Numerical model of a superluminal laser In order to calculate and analyze accurately the dynamics of the DPAL-Raman system described in section 3, it is necessary to use numerical methods.Though the analytic model from section 2 is instructive, it relies on the assumption that the gain and dip profiles are Lorentzians with fixed parameters.While this is a good first-order approximation, the exact characteristics of the gain and dip profiles depend on many interconnected variables.Our numerical approach relies on solving the single mode laser equations [Eqs.(1) and (2)] and the density-matrix equations simultaneously and iteratively until a self-consistent, steady-state solution is found.The algorithmic procedure is illustrated in Fig. 3; a more detailed discussion of this algorithm can be found in [18]. Calculation of the density matrix of the gain system The gain cell contains vapor of naturally-occurring Rubidium mixed with Ethane buffer gas.Both isotopes are modeled as four-level systems.Figure 4 illustrates this system for the 85 Rb isotope.Here 1 and 2 are the F = 2 and F = 3 hyperfine states, respectively, in the 5S 1/2 manifold.3 is the entire 5P 1/2 manifold, and 4 is the entire 5P 3/2 manifold.For 87 Rb, 1 and 2 are the F = 1 and F = 2 hyperfine states, respectively, in the 5S 1/2 manifold.3 is the 5P 1/2 manifold, and 4 is the 5P 3/2 manifold.The buffer gas causes rapid dephasing in both isotopes, thus producing homogeneous broadening of several tens of GHz, depending on the buffer gas pressure.Therefore it is unnecessary to consider the hyperfine sublevels within the 5P 1/2 and 5P 3/2 manifolds.The BGI (buffer gas-induced) broadening is also wider than the splitting between 1 and 2 , so that the optical pump excites atoms from both 1 and 2 into 4 .The strengths of the 1 -4 and 2 -4 transitions (i.e.their Rabi frequencies) are assumed to be equal for simplicity, and denoted as Ω P .The 1 -3 and 2 -3 transitions, which are coupled by the intra-cavity lasing beam itself, are also assumed to be equal in strength, and are labeled as L Ω .3R G and 4R G are the inverse radiative lifetimes of the 5P The Liouville equation [18,19] is the density matrix equation of evolution: where G ρ  and G H   are the density matrix and the modified Hamiltonian, respectively, of the gain medium in the rotating wave basis.In Eq. ( 8), the second term accounts for equality between net population inflows and outflows, and the third term accounts for the BGI dephasing between different states. The modified rotating-wave Hamiltonian of this gain system is expressed as: The source term in Eq. ( 8) is: All six of the inter-level transitions are assumed to have equal BGI dephasing rates, denoted as d G , so that the last term in Eq. ( 8) is: Calculation of the density matrix of the dip system The dip cell contains pure 85 Rb vapor with no buffer gas, and has three different beams going through it, as shown in Fig. 
5(a).Since the optical pump transfers population from level 1 to 2 , it can be modeled as an effective decay rate, denoted as OP G .This enables the problem to be analyzed as an effective 3-level system in which the decay rate from 1 to 2 is now The density matrix equation of evolution for the dip system does not have a BGI dephasing term since the dip cell contains no buffer gas.Therefore: where: and: Effective susceptibility of the superluminal laser Due to BGI broadening in the gain cell, the lasing beam interacts with the 1 -3 and 2 - 3 transitions in both the 85 Rb and 87 Rb isotopes.The susceptibility of this beam to 85 Rb atoms in the gain cell, 85 G χ − , is related to the density matrix through the following relation where G n is the number density of gain atoms, and the "85" subscript refers to the 85 Rb isotope.I SAT (13) and I SAT(23) are the effective saturation intensities of the 1 -3 and 2 -3 transitions, respectively.These quantities are calculated by averaging the saturation intensities of all constituent Zeeman transitions for σ + or σ -excitation (the actual polarization is linear, which consists of equal parts of σ + and σ -).I SAT (13) and I SAT(23) are found to be 8.347 mW/cm 2 and 6.283 mW/cm 2 , respectively [20,21].A similar expression applies for 87 G χ − , the susceptibility of 87 Rb atoms, with I SAT (13) and I SAT(23) equaling 7.011 mW/cm 2 and 4.531 mW/cm 2 , respectively.The total susceptibility in the gain cell, G χ , is therefore: where 72% and 28% are the natural abundance of these isotopes.There is no BGI broadening in the dip cell, so that the lasing beam only couples the 1 -3 transition in the 85 Rb isotope.Therefore the susceptibility in the dip cell is: Thus the effective susceptibility of the whole system is: where L is cavity length, and G L and D L are the gain cell and dip cell lengths, respectively.The numerical model then solves the single-mode laser equations and determines the value of eff χ iteratively until a self-consistent steady-state solution is reached.Details regarding such an algorithm can be found in [18,19]. Experimental results and comparison with numerical model Direct measurement of superluminal enhancement requires a high degree of classical noise suppression as well as fast servos to maintain a high degree of stability of the laser sources.In our experiment, these systems are not yet precise enough to perform direct measurements of enhancement.Effort is in progress to improve the stability of our apparatus to the level necessary for such a measurement.However, it is still possible to infer the degree of superluminal enhancement through a careful measurement and characterization of the dip observed in the laser output, and comparing the result with the theoretical model.Specifically, we first show that the theoretical results are in close agreement with the observed data for a range of sets of parameters.For each set of parameters, we then use the theoretical model to infer the corresponding group indices and superluminal enhancement factors. Details of experimental setup The experimental apparatus and setup are shown schematically in Fig. 
6.The laser cavity is an equilateral triangle with each leg 24 cm in length.The output coupler is flat and has a reflectivity of 60% while the other two mirrors are high reflectors with 20 cm radii of curvature.As mentioned previously, these cavity dimensions and radii of curvature are chosen so that the laser output is astigmatism-free without the use of an intra-cavity lens.The cavity mode has two waists which are located at the output coupler and the center of the gain cell.The gain cell is made with ConFlat® components which support high vacuum, and the windows are anti-reflection coated on both sides to minimize roundtrip loss.The gain cell is connected to an oven containing an ampoule of naturally-occurring Rubidium.A cylinder containing research-grade ethane gas is also connected to the gain cell, and the ethane pressure is controlled with a regulator.The oven and gain cell are each wrapped with heating wire.The windows are wrapped with a separate heating wire which is kept at a slightly higher temperature than the rest of the cell, in order to prevent condensation of Rb vapor on the windows. The optical pump is produced by amplifying the output of a Toptica tunable diode laser with a Sacher Lasertechnik tapered amplifier (TA).This optical pump beam is s-polarized so that it is reflected into the gain cell by a polarizing beam splitter (PBS).A PBS at the other end of the gain cell expels the portion of optical pump not absorbed by the gain atoms, so that the optical pump does not make it through the output coupler.Because of the presence of the PBS's, only p-polarized light experiences roundtrip gain.Thus, the laser output is p-polarized.The optical isolator prevents directional mode competition by ensuring lasing in only one direction, but it also rotates the input light by 45°.Thus, a half-wave plate is placed directly after the isolator in order to rotate the light back to p-polarization.Because the gain profile is several GHz wide, while the cavity free spectral range is approximately 400 MHz, an etalon is necessary to eliminate longitudinal mode competition and mode hopping.Additionally, rotation of the etalon provides tunability of roundtrip optical path length and therefore detuning of the DPAL output frequency. 
A fraction of the DPAL output goes to a photodetector, while the remainder is diverted to an acousto-optic modulator (AOM).The frequency of the modulating acoustic signal is approximately 1.518 GHz (half the ground-state hyperfine splitting in 85 Rb) and is produced by a voltage-controlled oscillator (VCO).The sidebands are separated spatially, and the firstorder upshifted sideband is reflected back into the AOM to produce a double-shifted beam, which is then diverted with a beam splitter.This upshifted Raman probe beam has a maximum power of a few hundred microwatts, which is not strong enough to provide the range of Raman depletion necessary for comprehensive characterization thereof.Thus, the Raman probe is amplified through another TA.In addition to the amplified beam, the TA produces some stimulated emission with a spectrum hundreds of GHz wide.The holographic grating separates all unwanted frequency components from the TA output so that the Raman probe beam is spectrally pure.This Raman probe is then injected into the Raman cell with a PBS.In order to suppress sensitivity to Doppler broadening, it is necessary for the Raman probe to propagate in the same direction as the intra-cavity lasing beam for maximum twophoton interaction.The Raman chamber is a sealed quartz cell containing pure 85 Rb vapor.In order to prevent Zeeman splitting, heating wire is wrapped around the cell bifilarly, and the cell is housed inside a mu-metal box.A fraction of the optical pump (the same one as used in the gain cell) is used as the optical pump for the Raman cell.The direction of propagation of the optical pump beam does not matter since it provides one-photon excitation.As such, injecting the optical pump in the counter-propagating direction circumvents the need to combine the Raman probe and the optical pump with a beam splitter, which would waste half of the power and require more optical components. Theoretically the Raman interaction should be maximized when the frequency difference between the Raman pump and the Raman probe, Δ, is equal to ω 21 , the 85 Rb ground-state hyperfine splitting.In this experiment, δ AOM (which equals Δ-ω 21) is scanned about a central value of zero [Fig.5(b)].The photodetector [Fig.6] monitors the laser output power versus δ AOM .This measurement is made for several different values of Raman probe power. In this experiment, the gain cell and the dip cell are each 10 centimeters in length.The gain cell is heated to a temperature of 120°C and contains ethane with pressure of 0.06 atm, while the dip cell is at a temperature of 100°C.The DPAL optical pump is 200 µm in radius and 1.2 W in power, while the Raman optical pump is 1000 µm in radius and 10 mW in power.There appears to be reasonable qualitative and quantitative agreement between the numerical model and the experimental data.The depth and width of depletion both increase monotonically with increasing Raman probe power, with values matching reasonably well between theory and experiment.However, there are two main discrepancies to note: First, the experimental data appears to have a higher degree of asymmetry than the numerical model, especially for higher Raman probe power.Second, the calculation shows the dip shifting towards the left with increasing Raman probe power, whereas the dip location is roughly constant in the experimental data. 
Experimental results and comparison to numerical simulations There are a few potential sources of these discrepancies.First, our model for the atomic transitions is somewhat simplified, both for the DPAL gain and Raman depletion.Specifically, states 1 , 2 , 3 and 4 each contain several Zeeman sub-levels, but are treated as a single state.Since the DPAL gain is quite broad, ignoring these details is not likely to affect the numerically-calculated DPAL gain spectrum significantly.Furthermore, for the data shown in Fig. 7(a), the parameters affecting DPAL gain are held at constant values.Thus, simplification in the modeling of the DPAL gain is unlikely the source of the discrepancy.In contrast, the Raman depletion process has a much narrower bandwidth, and the details of the energy levels mentioned above may affect the spectral shapes of the dip.In the Raman cell, only the 85 Rb isotope is relevant, and the details of state 4 can be left out since it is used only for optical pumping.Taking into account the details of states 1 , 2 and 3 would require us to consider a total of 24 Zeeman sublevels.The matrix that determines the evolution of the system (based on the density matrix approach) has a size of N 2 × N 2 , where N is the number of energy levels.Thus, accounting for every Zeeman sublevel would increase the size of this matrix by a factor of (24/3) 4 = 4096.Given that our algorithm for solving the laser power and frequency is iterative, such an increase would enormously inflate the computation time, thus making it very difficult to explore the parameter space. It is also important to note that in the numerical model for the Raman depletion, we have assumed that the Raman pump couples only to the 2 -3 transition, and not to the 1 -3 transition, while the Raman probe couples only to the 1 -3 transition, and not the 2 -3 transition.In reality, for the Raman pump and Raman probe, there is coupling to both of these transitions, and the difference between the degree of coupling to the 1 -3 transition and the 2 -3 transition depends on the detuning with respect to state 3 .However, developing codes that go beyond this approximation is difficult, because it is no longer possible to make the rotating wave approximation, and one must take into account higher order harmonics of the beat note between the Raman pump and the Raman probe [22].This approximation may account for the absence of asymmetry in the theoretical results.In the near future, we will develop a more comprehensive code that will not make these approximations and use it to determine whether the discrepancies between experiment and theory can be eliminated.is increased relative to the slope of the dotted line.As expected, the effects which create superluminal enhancement become more pronounced with increasing values of Raman probe power.For example, a Raman probe power of 24 mW yields an enhancement factor as high as 190( = 10 2.28 ), as shown in Fig. 8(b).In principle, the enhancement factor can be several orders of magnitude greater than unity with the proper choice of experimental parameters.The frequency range of this enhancement is limited by higher-order nonlinear terms in the effective dispersion profile of the lasing beam.As mentioned previously, we are currently working on achieving high enough stability in the experiment so that we can directly measure this enhancement factor. 
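The N² × N² growth of the evolution matrix mentioned above, which is what makes a full 24-sublevel treatment (a factor of (24/3)⁴ = 4096 more matrix elements) so expensive, comes from vectorizing the Liouville equation. The Python sketch below illustrates that vectorization and a steady-state solution for a generic three-level system with one field and one decay channel. It uses a standard Lindblad form rather than the specific Hamiltonian, source and dephasing terms of Eqs. (8)–(12); all couplings and rates are placeholders.

```python
import numpy as np

# Vectorize d(rho)/dt = -i[H, rho] + sum_k ( L_k rho L_k^† - {L_k^† L_k, rho}/2 )
# into a single (N^2 x N^2) superoperator acting on the column-stacked rho.

def liouvillian(H, c_ops):
    N = H.shape[0]
    I = np.eye(N)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))            # commutator part
    for c in c_ops:
        cdc = c.conj().T @ c
        L += np.kron(c.conj(), c)                           # L rho L^†
        L -= 0.5 * (np.kron(I, cdc) + np.kron(cdc.T, I))    # anticommutator part
    return L                                                # shape (N^2, N^2)

def steady_state(L):
    # Null vector of L, renormalized so that Tr(rho) = 1.
    w, V = np.linalg.eig(L)
    v = V[:, np.argmin(np.abs(w))]
    N = int(np.sqrt(L.shape[0]))
    rho = v.reshape(N, N, order="F")                        # undo column stacking
    return rho / np.trace(rho)

# Placeholder 3-level system: a field couples |1>-|3>, and |3> decays to |2>,
# loosely analogous to modeling the optical pump as an effective decay rate.
Omega, Delta, gamma = 1.0, 0.2, 0.5
H = np.zeros((3, 3), complex)
H[0, 2] = H[2, 0] = Omega / 2
H[2, 2] = -Delta
decay = np.zeros((3, 3)); decay[1, 2] = 1.0
L = liouvillian(H, [np.sqrt(gamma) * decay])

rho_ss = steady_state(L)
print("steady-state populations:", np.real(np.diag(rho_ss)).round(3))
print("superoperator size:", L.shape)                       # (9, 9) = (N^2, N^2)
```

For N = 3 the superoperator is 9 × 9; keeping all 24 Zeeman sublevels of the dip cell would make it 576 × 576, i.e. the 4096-fold growth in the number of matrix elements noted in the text, which is why the iterative self-consistent solution becomes impractical at that level of detail.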
Summary and future work We have demonstrated a laser with a narrowband dip within a broader gain profile, the behavior of which matches reasonably well with our computational density matrix model. With the proper choice of parameters, such a laser can be several orders of magnitude more sensitive to perturbations in cavity length than a conventional laser interferometer. Therefore, such a device has the potential to advance the field of high-precision metrology, with applications such as vibrometry, accelerometry and rotation sensing. Among our immediate goals are improving the quality and stability of our measurement systems so that we can directly measure superluminal effects. We are also investigating alternative schemes for realizing superluminal enhancement. Comparison of our theoretical model with experimental results indicates that the observed dip for one set of parameters corresponds to a superluminal laser with an enhancement factor of ~190, and much higher values of this factor can be realized with further tuning of parameters. Fig. 2. (a) Schematic of the superluminal laser; (b) energy levels and optical fields in the gain cell; (c) energy levels and optical fields in the dip cell. Fig. 3. Flow chart illustrating the iterative algorithm used to calculate laser output frequency and amplitude. Fig. 4. Illustration of the energy levels, optical fields and decay rates for the gain cell. Fig. 5. (a) Illustration of the energy levels, optical fields and decay rates for the dip cell; (b) effective 3-level system in which the optical pump is equivalently modeled as a decay rate. Fig. 6. Schematic of the experimental setup for realizing a superluminal laser. See text for details. Figure 7 shows the DPAL output power vs. the pump-probe detuning for six different values of Raman probe power. Figure 7(a) shows the experimentally observed signals, while Fig. 7(b) shows the corresponding numerical results. In each case, the horizontal axis is δ_AOM. Figure 8(a) shows the theoretical change in the laser frequency versus changes in roundtrip cavity length, while Fig. 8(b) shows the corresponding enhancement factors. The enhancement factor can be thought of as the factor by which the slope of Δf/ΔL in Fig. 8(a) is increased relative to the slope of the dotted line. Fig. 8. (a) Frequency shift versus cavity length change for various values of Raman probe power. The dotted line represents the change in DPAL output frequency vs. cavity length change for a conventional laser without Raman depletion; (b) sensitivity enhancement factors (log scale), calculated as the ratio of the slope of Δf/ΔL with Raman depletion to the slope without Raman depletion. [Fragment from the extraction, subject lost:] are determined only by the ratios of the Zeeman degeneracies, and are equal to 7/5 and 2, respectively, for 85 Rb.
7,147
2016-11-28T00:00:00.000
[ "Physics" ]
LILRB4 Decrease on uDCs Exacerbate Abnormal Pregnancy Outcomes Following Toxoplasma gondii Infection Toxoplasma gondii (T. gondii) infection in early pregnancy can result in miscarriage, dead fetus, and other abnormalities. The LILRB4 is a central inhibitory receptor in uterine dendritic cells (uDCs) that plays essential immune-regulatory roles at the maternal–fetal interface. In this study, T. gondii-infected human primary uDCs and T. gondii-infected LILRB4-/- pregnant mice were utilized. The immune mechanisms underlying the role of LILRB4 on uDCs were explored in the development of abnormal pregnancy outcomes following T. gondii infection in vitro and in vivo. Our results showed that the expression levels of LILRB4 on uDCs from normal pregnant mice were obviously higher than non-pregnant mice, and peaked in mid-gestation. The LILRB4 expression on uDC subsets, especially tolerogenic subsets, from mid-gestation was obviously down-regulated after T. gondii infection and LILRB4 decrease could further regulate the expression of functional molecules (CD80, CD86, and HLA-DR or MHC II) on uDCs, contributing to abnormal pregnancy outcomes. Our results will shed light on the molecular immune mechanisms of uDCs in abnormal pregnancy outcomes by T. gondii infection. INTRODUCTION Toxoplasma gondii (T. gondii) is an obligate intracellular parasite capable of infecting a wide range of mammalian hosts including humans. Infection by T. gondii during pregnancy can cause severe sequelae, such as spontaneous abortion, stillbirth, low birth weight, and significant birth defects for surviving neonates (Robbins et al., 2012). Although several immune mechanisms have been postulated Zhao et al., 2013;, the detailed mechanisms underlying adverse pregnancy outcomes following T. gondii infections need to be further explored. The microenvironment at the maternal-fetal interface plays an important role in maintaining normal pregnancy (Hunt and Robertson, 1996). Multiple immune cells and cytokines at the maternal-fetal interface participate in protecting the semi-allogeneic embryo from maternal attack and promote immune tolerance during pregnancy (Guleria and Sayegh, 2007). Among these immunocompetent cells at the maternal-fetal interface, antigen-presenting cells (APCs) are regarded as important participants in immune regulation during pregnancy (Della Bella et al., 2011). Dendritic cells are essential for the initiation of primary immune responses and have been reported to induce immunological tolerance and to regulate cell-mediated immune responses (Langenkamp et al., 2000). Uterine dendritic cells (uDCs) are scattered throughout the decidualized endometrium throughout gestation and play vital immune-regulatory role at the maternal fetal interface (Laskarin et al., 2007). In normal human early pregnancy, uDCs include BDCA-1 + CD19 − CD14 − myeloid DC type 1 (MDC1), BDCA-3 + CD14 − myeloid DC type 2 (MDC2), and BDCA-2 + CD123 + plasmacytoid DC (PDC) subsets (Ban et al., 2008). MDCs have been reported to induce certain forms of immunity responsible for the maintenance of a normal pregnancy (Gardner and Moffett, 2003). Mice uDCs have been classified into two distinct subsets, CD11c + CD8α − and CD11c + CD8α + . The CD11c + CD8α − subset belongs to the myeloid lineage and comprises the vast majority of uDCs, mainly exhibiting an immature phenotype and contributing to the induction of maternal-fetal immune tolerance (Blois et al., 2004). 
In the periphery, tolerogenic status for DCs is characterized by a low level of the co-stimulatory molecules CD80 and CD86 and high expression of the inhibitory receptors LILRB4 (also called ILT3, gp49B, CD85k) (Adorini et al., 2004). The inhibitory receptor LILRB4, which is mainly expressed on professional APCs, belongs to immunoglobulin superfamily members and contains an immune-receptor tyrosine-based inhibitory motif (ITIM) in their intracellular domains to transduct negative signals (Cella et al., 1997;Kim-Schulze et al., 2006). LILRB4-expressing APC plays prominent roles in controlling inflammation by inhibiting the expression of costimulatory molecules (Chang et al., 2009;Vlad et al., 2010). Further evidence indicated that over-expression of LILRB4 can inhibit the transcription of NF-κB-dependent genes that encode co-stimulatory molecules (CD80, CD86) in DCs (Cella et al., 1997). Functional studies have suggested that LILRB4 neutralization can enhance antigen presentation (Regnault et al., 1999;Kasai et al., 2008). At the fetal-maternal interface, in vivo studies showed that LILRB4 mRNA has been detected in murine uterine endometrium during early pregnancy (Matsumoto et al., 1997), and LILRB4 protein expression was detected on immature uDCs of human decidual tissue using flow cytomety (Ban et al., 2008). Our previous study showed that uDCs contribute to abnormal pregnancy outcomes caused by T. gondii infection in early pregnancy . Most importantly, our recent study has reported that LILRB4 on decidual macrophage was involved in the development of abnormal pregnancy outcomes during T. gondii infection (Li et al., 2017). Whether LILRB4 on uDCs also contributed to abnormal pregnancy outcomes after T. gondii infection remains unclear, and the associated mechanisms are also unknown. Hence, in the present study, T. gondii-infected human uDCs and T. gondii-infected LILRB4 −/− pregnant mice were used to explore the mechanisms related to LILRB4 on uDCs that lead to abnormal pregnancy outcomes. Maintenance of T. gondii Tachyzoites (RH Strain) The T. gondii tachyzoites were cultured in HEp-2 cells in Minimum Essential Media (MEM) (Hyclone, United States), 5% fetal bovine serum (FBS; Gibco, United States), and 100 IU/ml penicillin/streptomycin (Sigma-Aldrich, United States). After culture, tachyzoites were centrifuged at 1500 rpm (433 × g) for 10 min, and purified tachyzoites were resuspended in MEM and counted using a Neubauer chamber. The experiment was carried out in BSL-2 laboratories. All the liquids, consumables and labwares contaminated with the parasites were collected and steeped immediately in disinfectant, and sterilized by highpressure sterilizer. The mice carcasses were collected in ice locker and transported out by the professionals of public health agencies. Human Abortion Sample Collection Abortion samples of the first-trimester decidua (6-12-week gestation) from elective termination procedures were obtained from Yantai Affiliated Hospital of the Binzhou Medical University. All patients signed an informed consent form before enrollment, and sample collection for this study was approved by the ethics Committee of Binzhou Medical University. The decidual tissues were rinsed in sterile saline solution and rapidly transferred to ice-cold Roswell Park Memorial Institute medium (RPMI). All samples were disposed as soon as possible. 
Human sample collection procedures for this study were approved by the Binzhou Medical University Ethics Committee, and all patients provided written informed consent for the collection of samples and subsequent analysis. Human Decidual Cell Preparation and Flow Cytometry Analysis Decidual tissues were washed 4-5 times with RPMI medium. Pieces of decidual tissue were minced using the gentle MACS TM Dissociator (Mitenyi-Biotec, Germany) according to the manufacturer's instructions, and then digested in 0.1% collagenase type IV and 25 IU/ml DNase I (both from Sigma-Aldrich, St. Louis, MO, United States) in RPMI for 45 min at 37 • C with gentle rotation. Single cell suspension was filtered twice with a 75 µm pore size nylon mesh (Miltenyi Biotec GmbH, Bergisch Gladbach, Germany) and centrifuged at 2000 rpm (771 × g) for 10 min at room temperature. The mononuclear cells were then isolated by density gradient separation over a standard Ficoll-Hypaque Lymphoprep (1.077, Haoyang Biological Manufacture, Co., Tianjin, China). Decidual mononuclear cells were collected from the interface and were washed twice in cold phosphate buffer solution (PBS). Then the cells were incubated for 12 h in RPMI supplemented with 10% FBS, 100 IU/ml penicillin, and 100 IU/ml streptomycin in a humidified incubator at 37 • C with 5% CO 2 . After 12 h of culture, the cells for the infected group were co-cultured with T. gondii tachyzoites at a ratio of 1:2 for 12 h in 6-well culture plates. The LILRB4neutralized infected group was infected at the same condition in the presence of anti-IILRB4 neutralizing antibodies (mAbs) (10 µg/mL, eBioscience, United States). The uninfected group was considered as control. Cells were incubated for 12 h in the same condition as described above. The mononuclear cells were collected and stained with fluorophore-conjugated mAbs: FITCconjugated anti-CD1c (BDCA-1), FITC-conjugated anti-CD303 (BDCA-2), FITC-conjugated anti-CD141 (BDCA-3) (all from Miltenyi Biotec GmbH, Bergisch Gladbach, Germany), PerCP-Cy5.5-conjugated anti-CD14, CD19, CD123, PE-conjugated anti-HLA-DR, PE-conjugated anti-CD80, PE-conjugated anti-CD86 (all from BD Pharmingen, United States), and APC-conjugated anti-LILRB4 (eBioscience, United States). The mAbs were added according to the manufacturer's protocol. After incubation, the cells were washed twice with PBS and resuspended in 600 µl of PBS. The cells were detected using BD FACSAria flow cytometry (Becton Dickinson, United States) and the data were analyzed using BD FACSDiva 7.0 software (Becton Dickinson). Animal Models Wild type (WT) female mice (Beijing Weitong Lihua Experimental Animal Technical, Co., Ltd.) and LILRB4 −/− female mice (Experimental Animal Division RIKEN BioResource Center, Japan) at 6-to 8-week-old were mated with the corresponding 8-to 10-week-old male mice overnight at a ratio of 2:1 and were checked for vaginal plugs the next morning. Females with a vaginal plug were segregated and designated as gestational day 0 (Gd 0). The infected group were inoculated intraperitoneally (i.p.) with 400 tachyzoites in 200 µl sterile PBS on Gd 8. The uninfected groups were inoculated with equivalent PBS at the same time. The protocol of animal experiment was approved by the Committee on the Ethics of Animal Experiments of the Binzhou Medical University. All procedures were performed under sodium pentobarbital anesthesia, and all efforts were made to minimize suffering of the animals. 
Mice Mononuclear Cell Preparation and Flow Cytometry Mononuclear cells from mouse uterine and placenta were prepared as previously described . Briefly, mice were sacrificed on Gd 14. The uterus and placenta were excised with surgical cuts. The pregnancy outcome was reflected by the total number of fetuses, fetal size, stillbirth, absorptive fetus, and hemorrhagic appearance. The uterus and placentas were washed with sterile PBS, minced into small pieces. Dispersed cells were collected by filtration through a 48 µm pore size stainless steel mesh. Thereafter, the mononuclear cells were obtained by density-gradient centrifugation and washed twice in cold PBS. The following fluorochrome-conjugated, mousespecific mAbs were used in the assays: FITC-conjugated anti-CD11c (BD Biosciences, United States), Percp-cy5.5-conjugated anti-CD8a mAb (BD Biosciences, United States), PE-conjugated anti-CD80 (BD Biosciences, United States), PE-conjugated Statistical Analysis Data were presented as the mean ± SEM. Statistical analysis was performed using the SPSS statistics software package (SPSS 17.0; SPSS, Inc., Chicago, IL, United States). Unpaired t-tests were used after verifying that the data of the groups had a normal distribution by SAS to compare two independent groups. p < 0.05 was regarded as significant and p < 0.01 was considered as very significant. The Abnormal Pregnancy Outcomes Were Related to LILRB4 in T. gondii Infected Mice Toxoplasma gondii infection during early pregnancy can cause abnormal pregnancy outcomes. In order to explore the effects of LILRB4 on the adverse pregnancy outcome caused by T. gondii, we observed the pregnancy outcomes of T. gondii infected LILRB4 −/− mice. The results were in accordance with our previous studies (Li et al., 2017). In both the WT and LILRB4 −/− infected pregnant group, the pregnant mice were flagging, shambling, and had erected fur, whereas the control mice were nimble, restive, and had smooth pelage ( Figure 1A). Compared with WT infection groups, most fetuses from LILRB4 −/− infected pregnant mice were evidently smaller and shapeless, placentas were obvious more hemorrhagic and resorbed ( Figure 1B). The weight of fetus or placenta from LILRB4 −/− infection group was less than that of WT infection group respectively (Figures 1C,D). The abortion rate of LILRB4 −/− infected group was significantly increased than that of WT infection group ( Figure 1E). Thus, the knockout of LILRB4 aggravate the abnormal pregnancy outcome caused by T. gondii infection. The Dynamic LILRB4 Expression on Murine uDCs in Normal Pregnancy CD11c + cells were gated as uDCs shown in Figures 2A,B. LILRB4 expression on uDCs of normal pregnant mice at Gd 5,8,10,12,14,16,18, and non-pregnant mice was detected using flow cytometry (Figure 2C). The results showed that LILRB4 expression was at a lower level on uDCs from non-pregnant mice compared with the normal pregnant mice at all the designed time points. During pregnancy, LILRB4 expression gradually increased from Gd 5 to Gd 14, peaked on Gd 14, and decreased after Gd 14. Flow cytometry analyses of LILRB4 dynamic expression on CD11c + uterine DCs were performed using non-pregnant mice and normal pregnant mice at day of gestation (Gd) 5, 8, 10, 12, 14, 16, 18, respectively. Data are shown as means ± SD for 10 pregnant mice and differences were identified using unpaired t-tests ( * p < 0.05, * * p < 0.01). LILRB4 Expression on uDCs Subsets in Both Mice and Human Was Down-Regulated After T. 
gondii Infection Flow cytometry analyses showed that LILRB4 was expressed on all the human MDC1, MDC2, and PDC subsets at a high level in normal pregnancy (Figures 3A,B). After T. gondii infection, the expression level of LILRB4 was significantly down-regulated on MDC2 and PDC subsets compared with the uninfected group. In normal pregnant mice, LILRB4 was expressed both on CD11c + CD8a − and CD11c + CD8a + uDC subsets. More importantly, LILRB4 expression on the tolerogenic CD11c + CD8a − uDC subset was much higher than that on the CD11c + CD8a + uDC subset (Figures 3C,D). After T. gondii infection, LILRB4 level decreased on the CD11c + CD8a − uDC subset ( Figure 3E) while increased on CD11c + CD8a + uDC subset ( Figure 3F). compared with the uninfected group. On uDCs which LILRB4 was neutralized, CD80, CD86, and HLA-DR were further up-regulated compared with the infected cells. The in vivo studies showed that the levels of CD80, CD86, and MHC II on tolerogenic CD11c + CD8α − uDC subset were significantly lower than on CD11c + CD8α + uDC subset during normal mice pregnancy (Figure 5). After T. gondii infection, CD80, CD86, and MHC II on murine CD11c + CD8α − and CD11c + CD8α + FIGURE 4 | Down-regulation of LILRB4 on human uDC subsets with T. gondii infection resulted in changes in the expression of functional molecules of human uDCs. Flow cytometry and histogram analyses of CD80 (A), CD86 (B), HLA-DR (C) and LILRB4 expression in BDCA-1 + , BDCA-2 + , and BDCA-3 + uDC subsets were performed for normal, infected, and anti-LILRB4 neutralizing antibody-treated infected human uDCs. Data are shown as means ± SD ( * p < 0.05, * * p < 0.01). Representative data from six independent experiments for each group. Frontiers in Microbiology | www.frontiersin.org uDC subsets were both up-regulated, and they were further increased in LILRB4 −/− infected mice compared with the infected mice (Figures 5A-C). And the up-regulation of these functional molecules was more obvious on the tolerogenic CD11c + CD8α − uDC subset than CD11c + CD8α + uDC subset (Figures 5D-F). DISCUSSION It was reported that the tolerogenic functions of DCs are characterized by high expression of the inhibitory receptor LILRB4 and low co-stimulatory potential (Chang et al., 2002). During normal pregnancy, LILRB4 can modulate the functions of APCs and plays important roles in immune regulation and tolerance at the maternal-fetal interface (Matsumoto et al., 2001). Although previous studies have shown that LILRB4 is expressed on uDCs during normal pregnancy, the dynamic expression levels remain unclear. In the present study, the level of LILRB4 expression throughout murine gestation was measured by flow cytometry. The results showed that LILRB4 expression was at the lower level in non-pregnant mice uDCs, gradually increased from the first trimester to the middle-late trimester, and peaking on day 14 of gestation in mice. More importantly, LILRB4 expression levels on tolerogenic mice CD11c + CD8a − uDC subsets were higher than that on other subsets during normal pregnancy. These data suggest that LILRB4 on uDCs, especially tolerogenic uDC subsets, may be beneficial for normal pregnancy. Our previous study showed that more severely adverse pregnancy outcomes was observed in T. gondii-infected LILRB4 −/− pregnant mice, that indicated LILRB4 was related to the development of abnormal pregnancy outcomes following T. 
gondii infection and have found that the expression of LILRB4 on decidual macrophage was detected downregulated (Li et al., 2017). Whether T. gondii infection could affect LILRB4 expression on uDCs and whether this effect subsequently contributes to abnormal pregnancy outcomes were all still unclear. In present study, in order to further investigate whether T. gondii infections could affect LILRB4 expression on uDCs subsets, LILRB4 expression was monitored on uDCs during T. gondii infection both in vitro and in vivo. The results showed LILRB4 expression to be significantly down-regulated by T. gondii infection on both human and mouse uDCs during pregnancy, and LILRB4 down-regulation were mainly on tolerogenic murine CD11c + CD8a − uDC subsets rather than on other uDC subsets. Hence, the results suggest that LILRB4 downregulation on uDCs, especially tolerogenic uDC subsets, may play a role in the development of T. gondii-mediated abnormalities. To further assess a mechanistic basis for these observations, membrane functional molecules CD80, CD86, and HLA-DR (MHC II in mice) on uDCs were examined during T. gondii infection. The functions of DCs are characterized partly by dynamic regulation of co-stimulatory molecules (CD80, CD86) and HLA-DR (McLellan et al., 1995;Nijman et al., 1995). The human decidual MDC1 (BDCA-1 + ) subset expresses low levels of CD80, CD86, and HLA-DR and is regarded as an immature decidual MDC subset involved in inducing immune tolerance (Gardner and Moffett, 2003;Mahnke et al., 2003). MDC2 (BDCA-3 + ) is similar to MDC1 with respect to the phenotype. LILRB4 high MDC2, which has a reduced allostimulatory capacity is considered a feature of tolerogenic DCs (Velten et al., 2004). PDC (BDCA-2 + ), which has a lower level of CD80 and CD86, was reported to contribute to the maintenance of normal pregnancy at the maternal-fetal interface (Miyazaki et al., 2003). So, the three subsets of u DC all are important for the normal human pregnancy. In the present study, the results showed that CD80, CD86, and HLA-DR in human MDC1, MDC2, and PDC were significantly upregulated after T. gondii infection followed LILRB4 decrease. These data suggested that T. gondii infection could significantly weaken the immune tolerogenic function in uDCs by downregulating LILRB4 expression and enhance immune-activated functions by up-regulating functional molecules CD80, CD86, and HLA-DR expression. To further clarify the role of LILRB4 in the adverse pregnancy outcome caused by T. gongdii infection, we performed experiments in which LILRB4 was neutralized with antibody in vitro to assess CD80, CD86, and HLA-DR expression. The results showed that functional molecules CD80, CD86, and HLA-DR expression in LILRB4neutralized infected human MDC1, MDC2, and PDC subsets were further up-regulated compared with the infected uDC subsets. The in vivo study showed that, followed the decrease of LILRB4, T. gondii infection significantly induced functional molecules CD80, CD86, and MHC II expressions on mice uDCs subsets, and more importantly, the up-regulation of functional molecules were more obvious on tolerogenic CD11c + CD8a − uDC subset than CD11c + CD8a + uDC subset. The results of T. 
gondii-infected LILRB4 −/− pregnant mice that the functional molecules (CD80, CD86, and MHC II) on the two uDC subsets were both further up-regulated compared with the infected WT mice and more obviously on the tolerogenic CD11c + CD8a − uDC subset, further confirmed the changes of the functional molecules resulted from LILRB4 down-regulation after T. gondii infection. The results demonstrated that the changes in functional molecules (CD80, CD86, and MHC II) are associated with decreased LILRB4 expression on uDCs, especially in tolerogenic uDC subsets, during T. gondii infection. A disturbance of uDC tolerance function, due to change in LILRB4 and functional molecules, may contribute to the development of abnormal pregnancy outcomes during T. gondii infection. CONCLUSION The results of this study show that down-regulation of LILRB4 on uDCs, especially on tolerogenic uDC subset, following T. gondii infection weakened immune-tolerogenic function of uDC by up-regulating functional molecules CD80, CD86, and HLA-DR (MHC II) expression and ultimately contributed to abnormal pregnancy outcomes by T. gondii infection. This investigation further shed light on the molecular immune mechanisms of uDCs in abnormal pregnancy outcomes due to T. gondii infection. AUTHOR CONTRIBUTIONS SZ, JZ, HZ, and XH designed the study and edited the manuscript. SZ, JZ, HZ, and LR performed the mouse experiments. XL, JZ, YJ, CY, LR, and MZ provided the samples and performed the patients experiments. SZ, HZ, and XH wrote the manuscript. SZ, JZ, and HZ made equal contributions to this paper. All authors read the final version of the manuscript and approved it for publication.
4,704
2018-03-28T00:00:00.000
[ "Biology", "Medicine" ]
Economic Assessment of Producing Corn and Cellulosic Ethanol Mandate on Agricultural Producers and Consumers in the United States 1Department of Ag Economics and Agribusiness, Louisiana State University Agricultural Center, Red River Research Station, 262 Research Station Drive, Bossier City, LA 71112, USA 2Texas A&M AgriLife, Office of Federal Relations, Suite 150, 1500 Research Parkway, College Station, TX 77843-2259, USA 3College of Agriculture, Auburn University, Auburn, AL 36849-5406, USA 4Department of Agricultural Economics, 600 Kimbrough Boulevard, Suite 211B, TAMU 2124, College Station, TX 77843-2124, USA Introduction A wide range of energy sources is being considered to meet growing energy demand, including ethanol derived from grains, cellulosic plant materials, and dedicated biomass energy crops. While the production of biofuels may be politically, and at first thought economically, attractive, several potential issues could prove costly in terms of land use, water use and quality, energy balance, food and animal feed availability, commodity prices, government outlays, and trade balance. With more than 80 percent of the world's food supply consisting of grains, competition between food and fuel crops for land and other resources is expected to translate into higher prices for staple foods globally [1,2]. Similar relationships were found in evaluations of the impacts of grain ethanol production mandates [3,4]. The US Renewable Fuel Standard (RFS), proposed to address the country's growing energy needs and to reduce reliance on imported oil, mandates production increases over recent levels for 2015 and 2016. The proposed volumes of cellulosic biofuels increased from 33 million gallons in 2014 to 106 and 206 million gallons for 2015 and 2016, respectively [5]. It is predicted that US biofuels production by 2022 will replace only 7.0 percent of the nation's expected gross annual gasoline consumption [6]. However, continued support for the biofuels program through government incentives, such as feedstock incentives, is expected to encourage the expansion of energy feedstock production in the USA. Such expansion could lead to biomass crop production on marginal lands, which are typically prone to high erosion, thereby diminishing the soil productivity of lands that are enrolled in some form of conservation [7]. In the USA, land is enrolled in conservation programs through the Conservation Reserve Program (CRP). Under the CRP, farmers agree to temporarily retire environmentally sensitive land from agricultural production and allow grasses to grow that improve the land's health and quality [8]. It was estimated that adding CRP land to crop production would decrease crop prices due to the increase in supply, yielding a $5.0 billion per year increase in consumer surplus; however, such gains come at the expense of 145 million tons of increased annual soil erosion [9]. Similarly, it was estimated that expansion of corn ethanol production would increase nitrogen losses from corn fields by 20 percent [10]. Substantial management costs would be incurred to mitigate such nutrient and sediment loadings [11]. Such research suggests that producing biofuel feedstock on these lands could potentially negate the environmental benefits achieved from the CRP program.
This paper focuses on evaluating the implications of a dedicated biomass crop as feedstock for bioenergy production, alternative energy policies, and government initiatives for agricultural producers and consumers. The specific objectives of this study include the application of a national quantitative model, the Agricultural Simulation Model (AGSIM) [12], to estimate the impacts of biomass crop production intended to meet the cellulosic component of the biofuel mandate in addition to the grain-based biofuel mandate. The economic impacts estimated include effects on cropping patterns, commodity prices, fertilizer use, fertilizer prices, consumer and producer surplus, and trade balance. The base estimation is followed by sensitivity analyses based on alternative assumptions concerning expiring CRP grassland acres returning to crop production and higher per-acre yields of the biofuel feedstock. Some recent studies have reported impacts on grain crop prices as a result of biofuel policies [13]. To produce sufficient biomass to meet the biofuel mandate, it is reasonable to assume competition for cropland between biomass crops and traditional crops, which can affect food and feed prices [9,14,15]. The current analyses improve on the existing literature [13,16] on the evaluation of biofuel mandates by accounting for competition for land with feed grain and other commodity crop production across the United States, that is, by analyzing the impacts at a national rather than a regional scale. The analysis accounts for supply and demand dynamics, including the export demand relationships of all major row crops in the USA. Additionally, in estimating the overall economic impacts we allowed for fertilizer price adjustments. Such a framework is an improvement over studies that analyzed the economic effects of biofuel policies accounting for only the corn markets [17]. Interesting relationships could evolve such that the response in commodity prices improves the economic position of agriculture but harms consumers and the balance of trade. Clearly, if the USA is to achieve some balance between food, fiber, timber, energy, and the resources needed for a sustainable agriculture, there is a need for a balanced evaluation of alternative biofuel policies. Model Description: AGSIM. AGSIM is an econometric-simulation model. It is based on a large set of statistically estimated demand and supply equations for major field crops and livestock feed, regionalized for the nine US Department of Agriculture (USDA) production regions [12]. AGSIM was initially developed in 1977 to evaluate the economic impacts of converting corn, grain sorghum, small grains, and crop residues to ethanol. The model has undergone subsequent revisions. It has been used to evaluate several agricultural policies, such as the expansion of CRP acreage, a tax on nitrogen fertilizer in the USA, and CRP land returning to production. Application of the model provides insight into the expected impacts of alternative policies on cropping patterns, crop prices, fertilizer use, fertilizer prices, consumer and producer surplus, and trade balance [3,18]. The current version of AGSIM includes supply and utilization of major crops. The demand for each commodity is separated into imports, exports, livestock feed, food, fiber, ethanol production, other domestic uses, ending stocks, and residual use, all keyed to the USDA annual baseline, as typically required by the USDA and USEPA for their internal policy discussions.
The USDA baseline is an annual report that provides long-run projections for the US agricultural sector and a basis of comparison for assessing the expected impacts of alternative policies and technologies [19]. For this study, exogenously specified biofuel policies relating to the cropland needed for biomass production to meet the RFS for cellulosic ethanol are incorporated. The model provides an estimate of a set of prices for all commodities that simultaneously clear all markets in each year, affecting profitability and cropping patterns in subsequent years. The model also provides estimates of economic surplus, calculated as the change in price times quantity for consumers and as the change in net farm income for producers. AGSIM, as applied in this study, is designed to provide estimates up to the year 2031. AGSIM differs from other large-scale agricultural economy models in two ways. First, AGSIM uses a single equation to account for total planted acreage in a region, while a set of share equations allocates the total acreage to individual crops in that region. Second, acreage response is based on expected net returns per acre of crop alternatives rather than on unit price, thus allowing for the evaluation of policies that could change production costs and yields. Methodology. The current analysis is based on the following assumptions: (1) cellulosic-based fuels come from a dedicated biomass crop, switchgrass (SG); (2) production will take place on US cropland primarily planted to traditional crops; and (3) no CRP land returns to crop production, that is, such land remains in pasture or other conserving use. Switchgrass is used across all US regions in the analyses as the biofuel feedstock. It is recognized as a model species for ethanol production by the US Department of Energy due to its high-yielding potential, tolerance to water and nutrient deficits, and noninvasive nature, and it can be grown on marginal lands [20,21]. Sensitivity analyses are conducted in which the CRP restriction is relaxed, that is, marginal lands previously in conservation use but currently not under contract are allowed to return to crop production. In addition, the analyses also estimate the economic implications of potentially higher per-acre yields of the biomass crop. The basic underlying parameter is that the RFS mandate will be met, starting with 250 million gallons in 2011 and reaching 16 billion gallons of production by 2022 [6]; however, it is recognized that the RFS for biomass-based fuels is being reduced based on updated market assessments. Four production scenarios are developed for evaluation, each with alternative levels of biofuel mandates initially assumed. Brief descriptions of the scenarios are presented in Table 1. Table 1: Description of the biofuel production scenarios under the base case where no conservation reserve program land returns to crop production. Utilization of a uniform yield level to identify the number of biomass acres for each of the US farm resource regions might either overestimate or underestimate the economic impacts of biomass crop production. Due to the lack of data on SG yields for the US farm resource regions specified in AGSIM, the hay yield for each region is used as a proxy to define relative SG yields. The uniform harvested SG yield per acre is adjusted based on relative differences in the hay yields for each of the nine US farm resource regions. Using hay yields allows the analysis to account for the heterogeneity in crop yields across regions.
The biomass acres are estimated as follows. First, the total number of biomass acres required to meet the RFS mandate is estimated using a uniform yield of 2.69 tons per acre and an ethanol conversion rate of 96.5 gallons per ton of biomass. The ethanol conversion rate is the average of the switchgrass-to-ethanol conversion estimates reported in the literature; the sources used to derive it can be found in [22]. Next, the percentage relationship of regional cropland to total US cropland over the nine US farm resource regions is used as the criterion to allocate the required biomass acres across regions. Each region's share of total cropland gives the percentage of biomass acreage for that region and that region's share of the annual cellulosic ethanol mandate. Finally, the acres across regions are adjusted based on relative SG yields estimated from the regional hay yields (a schematic sketch of this procedure is given at the end of this subsection). Since the base case analysis assumes no return of expired CRP land to crop production, the acres needed for biomass crop production are obtained by replacing traditional crop acres. The remainder of the land in each region adjusts across crops based on their relative net profitability. Such acreage shifts affect the supplies of traditional crops and consequently their prices. Fertilizer use and the price effects of biomass production are also outputs of AGSIM. However, AGSIM does not allow the fertilizer demand to be specified relative to the expected yield level of the biomass crop on a regional basis. Hence, fertilizer requirements corresponding to a yield level of 3.0 tons per acre are used, an average of the adjusted SG yields across all nine US farm resource regions. Nitrogen (N), phosphorus (P), and potassium (K) fertilizers are used in 3:1:2 proportions, resulting in 60 lb N, 20 lb P, and 40 lb K applied per acre for SG production in each region. Due to the unavailability of national data on SG yields, prices, and other costs, the present version of AGSIM does not account for the net farm income associated with SG production. Hence, the producer surplus estimates represent only the net income associated with the major crops. Results and Discussion The aggregate economic effects of the biofuel production scenarios are measured relative to a zero-production level of biofuels (Baseline). The implications include changes in the cropping patterns of major crops, effects on commodity prices, impacts on fertilizer prices, trade balance, and consumer and producer surplus. The results presented herein are for the year 2022, after the RFS mandates are met. Cropping Pattern Implications. Implementation of biomass crop production as a feedstock for energy influenced the cropping pattern in all nine US farm resource regions. Table 2 presents estimates of the average changes in major crop acres, with negative numbers indicating a decrease in crop acres. Due to the "no-CRP acres availability" assumption, the required biomass acres in each region originate from traditional crop production acres. The remainder of the land adjusts across crops based on their relative net profitability. Corn acres decreased the most under the cellulosic-only ethanol mandate scenarios (i.e., 0 + 16; 0 + 20), but the decrease was relatively smaller under the grain-plus-cellulosic ethanol mandate scenarios (i.e., 16 + 16; 16 + 20) (Table 2). Soybean acres decreased relatively more under all biofuel scenarios.
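For concreteness, the acreage-allocation procedure described earlier in this subsection can be sketched in a few lines of code. Only the 2.69 tons-per-acre uniform yield, the 96.5 gallons-per-ton conversion rate, and the 16-billion-gallon cellulosic mandate come from the text; the regional names, cropland shares, and relative hay yields below are placeholder values, not the paper's data.

```python
# Sketch of the regional biomass-acreage estimation described above.
MANDATE_GAL = 16e9        # cellulosic ethanol mandate by 2022 (gallons)
YIELD_T_PER_AC = 2.69     # uniform switchgrass yield (tons/acre)
CONV_GAL_PER_T = 96.5     # average switchgrass-to-ethanol conversion (gal/ton)

# Step 1: total acres needed at the uniform yield.
total_acres = MANDATE_GAL / (YIELD_T_PER_AC * CONV_GAL_PER_T)

# Step 2: allocate acres by each region's share of total US cropland
# (placeholder shares; the paper uses the nine USDA farm resource regions).
cropland_share = {"Region A": 0.22, "Region B": 0.18, "Region C": 0.60}

# Step 3: adjust by relative hay yields, the proxy for relative SG yields;
# a higher-yielding region needs fewer acres for its share of the mandate.
rel_hay_yield = {"Region A": 1.15, "Region B": 0.90, "Region C": 1.00}

acres = {r: cropland_share[r] * total_acres / rel_hay_yield[r]
         for r in cropland_share}

print(f"Total: {total_acres / 1e6:.1f} million acres")
for r, a in acres.items():
    print(f"  {r}: {a / 1e6:.1f} million acres")
```

At these parameter values the uniform-yield total works out to roughly 62 million acres before the regional yield adjustment.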
The acreage shifts described above are expected to influence the supplies of crops and, consequently, their prices. Impacts on Crop and Fertilizer Prices. As farmers respond to the biofuel mandates, crop production levels and prices adjust. The shifts of traditional crop acres to SG production are projected to reduce the supplies of the major crops and eventually increase prices for all crops (Table 3). Table 3: Average percentage change in major crop and fertilizer prices from baseline due to the biofuel production mandate across the US farm resource regions under the specified biofuel scenarios and the no conservation reserve program acres availability assumption. Although corn and soybean account for more than 90 percent of ethanol and biodiesel production in the USA, the effects of the RFS mandate are reflected in the acres and consequently the prices of all crops. Crop prices increased across all scenarios. Corn and wheat prices increased the most, by approximately 50 percent. Such price increases can affect the quantity supplied (i.e., the price elasticity of supply), but the biofuel obligation, along with no acres being available from the CRP, forces a balance across the available land. Biomass crops such as SG are described as having relatively low input requirements on a per-acre basis [23] compared with crops such as corn. Biomass production resulted in cropping pattern shifts such that there are fewer acres in input-intensive crops, causing a decrease in the total use of primary plant nutrients and, in turn, a decrease in fertilizer prices (Table 3). Sensitivity analyses with SG yield levels of 5.0 and 7.0 tons per acre provide additional insight into the fertilizer implications of higher SG yields combined with the biofuel production mandates. The availability of additional land for crop production, obtained by relaxing the assumption that no expired CRP land returns to crop production, is also examined below. Welfare Implications. The estimated aggregate economic effects of producing cellulosic feedstock for energy are presented in Table 4. The effects indicate a loss in economic well-being in the food sector due to high commodity prices. For example, RFS mandate production of 36 billion gallons increased net farm income by $49.7 billion due to higher crop prices. This increase is more than offset by a loss in consumer surplus of $55.9 billion (from the crop price increases). Similarly, the US trade balance, a measure of net exports, decreased compared with the baseline. The production of the cellulosic mandate in the AGSIM model occurs by displacing major crop acres across all US farm resource regions. Economic adjustments for all the major crops, especially on the supply side of the market, have been accounted for. Under the biofuel production mandates, increased domestic demand absorbed the supplies of major crops, consequently reducing net exports. These results illustrate the impact of a government mandate for the production of biofuels on producers and consumers. It is these types of unanticipated consequences that policy makers need to consider before enacting policies to address one issue, such as energy. Furthermore, the 36 billion gallons of ethanol is not a net addition to the fuel supply: accounting for the fossil fuels used in producing and converting the ethanol, the net addition to the fuel supply from 36 billion gallons of ethanol is approximately 7.5 billion gallons [24]. Our results differ from studies that have analyzed similar policies. In our analysis, using AGSIM, both demand and supply dynamics have been incorporated, including the corn export demand relationship, and fertilizer price adjustments within the domestic market are accounted for in the total surplus estimations.
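Taking just the two figures quoted above for the 36-billion-gallon mandate, and setting aside the trade-balance and other components reported in Table 4, the implied net change in total surplus is negative:

$$\Delta W \;\approx\; \Delta PS + \Delta CS \;=\; +\$49.7\ \text{billion} \;-\; \$55.9\ \text{billion} \;=\; -\$6.2\ \text{billion per year},$$

consistent with the negative total economic surplus reported in the Conclusion.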
Any or all of these differences in the analysis framework (demand and supply dynamics, export demand relationships, and fertilizer price adjustments) can lead to results that differ from those of other studies that evaluated national biofuel policies [17]. Sensitivity Analyses. Due to the numerous unknowns related to biofuels, including the potential conversion of CRP lands to crop production and conflicting information on SG yields, an expanded set of AGSIM applications (i.e., sensitivity analyses) is conducted across a wide array of assumptions. The results from these sensitivity analyses provide added insight into the potential economic impacts of biofuel production. Sensitivity Analyses: Conservation Reserve Program Acres. The potential return of CRP acres to cropland assumes no additional costs to convert CRP land to cropland. Data on expiring CRP contracts currently under grassland and trees, by region and by year, are available from the Farm Service Agency. There are 28 million acres of grasslands in the CRP that could potentially return to crop production. A series of sensitivity analyses allows first 25, then 50, and finally 100 percent of the expiring grassland CRP acreage (i.e., 7, 14, and 28 million acres) to return to biomass and/or crop production by region and by year. Crop acreage shifts vary across the alternative levels of CRP acres returning to production. The resulting impacts are presented in Table 5. Table 5: Average percentage change in crop prices, fertilizer prices, and overall welfare impacts from (a) conservation reserve program land in grasslands (million acres) returning to crop production and (b) higher switchgrass yields but no conservation reserve program land returning to crop production. The associated relative changes in economic impacts are compared with the solution of the "16 + 16, no-CRP acres availability" scenario. This analysis assumes the same productivity as non-CRP land. Within the operation of AGSIM there are many economic principles, including the response of price to a change in supply, typically referred to as the price elasticity of supply. The relatively small percentage decrease in crop prices due to the addition of CRP acres to cropland (Table 5) suggests a relatively inelastic supply. Such an effect indicates that changes in demand rather than in supply are the major factor affecting crop prices. For example, the corn price, which increased by 41.2 percent (Table 3) under the 16 + 16 scenario in the benchmark analyses (a demand effect), decreased by only 12.8 percent (Table 5) in the scenario where 28 million acres of CRP land are added to crop production (a supply effect). Much like the crop price changes, fertilizer prices doubled when the CRP acres doubled from 14.0 to 28.0 million acres. With more land, conventional wisdom suggests that farmers would be better off financially. However, due to the price elasticity of demand, the lower commodity prices more than offset the increase in acres, resulting in a lower net income than under biomass production with the "no-CRP acres availability" assumption. That loss in net income is partially offset by a higher consumer surplus due to the lower commodity prices. The total economic surplus due to the addition of 28 million CRP acres increased by 3.2 percent. The availability of additional CRP land for crop production increased the production acres of traditional crops; consequently, domestic supplies of these crops increase. With supplies exceeding domestic demand, net exports increase.
The net exports that were initially negative subsequently become positive because of the increased domestic supply. These economic impacts suggest that the competition for available land from biomass production is partially offset by CRP grassland returning to crop production. Not included in this analysis are the potential environmental impacts of converting marginal, erosion-prone land to crops; such phenomena represent externalities not quantified herein. Estimates of the costs of mitigating the water quality impacts of energy crop production show that internalizing the water pollution externality would entail substantial costs [11]. Sensitivity Analyses: Switchgrass Yields. Efforts toward developing high-yielding biomass varieties could improve the yield potential, so that fewer biomass acres would be required to meet the cellulosic biofuel mandate. Such a decrease in biomass acres would reduce competition for land with traditional crops, thereby influencing crop prices, fertilizer prices, and consequently the welfare measures. Hence, biomass acreage requirements are reestimated using SG yields of 5.0 and 7.0 dry tons per acre. Researchers from Oak Ridge National Laboratory and Dartmouth College compiled observations on SG yields across 17 states and reported average yields of 5.0 to 7.0 tons per acre for lowland and upland ecotypes, respectively [25]. The uniform per-acre SG yields are adjusted by region based on relative differences in regional hay yields to account for the heterogeneity in crop yields across US farm resource regions. The results of the SG yield sensitivity analyses are particularly important for countering any implication that unrealistically low yields are assumed in the analysis. The impacts of higher SG yields can be interpreted similarly to the availability of additional land from relaxing the no-CRP acres availability assumption. Table 5 includes the results of the SG yield sensitivity analyses, which are compared with the solution of the "16 + 16, no-CRP acres availability" scenario. AGSIM allocates total cropland based on the crops' relative net profitability. The availability of high-yielding biomass crops is evaluated as an increase (shift) in the supply of land. With biofuel demand held constant, the additional acreage made available by higher SG yields, and the consequent decrease in cropland required for biomass production, shifts the supply curves of the crops to the right, resulting in decreases in crop prices. The results indicate a trend of decreasing crop prices across all major crops due to the increases in supplies. Fertilizer prices increase under the higher biomass yield scenarios: increases in the acreage of traditional crops, which are relatively fertilizer-intensive compared with biomass crops, increase fertilizer demand and eventually raise fertilizer prices. The welfare effects indicate that farm income decreased due to the lower crop prices; due to the price elasticity of demand, the decreases in crop prices offset the increases in supplies, resulting in lower net income. However, the decreases in crop prices produced a slight increase in consumer surplus.
These results are potentially useful to the public's understanding of the economic consequences associated with a bioenergy policy, even with the potential of added cropland and greater per-acre biomass yields. Conclusion Possible domestic and international economic impacts of the US cellulosic biofuel mandates reflected in the RFS are estimated, with switchgrass considered as the dedicated cellulosic feedstock. The economic implications of additional land currently in conservation use returning to crop production and reducing competition for existing cropland are analyzed. Substantial increases in crop prices because of the biofuel mandate were observed across all scenarios, whereas the prices of the major plant nutrients decreased. Higher commodity prices resulted in a loss of consumer surplus, and the aggregate economic effects indicate a negative total economic surplus. The trade balance, a measure of net exports, decreased due to the reduced supplies of major crops and their increased domestic demand from the biofuel sector. The current evaluation of first- and second-generation biofuels produced results analogous to [3], that is, increases in crop and fertilizer prices and a decrease in total economic surplus. The results indicate that the present biofuel policies impose large costs on consumers in the form of increased commodity prices. These price increases can be expected to affect lower-income households more severely. Thus, there is a need to identify and consider the sectors most impacted by energy and other policy decisions. Conversely, the beneficiaries are the agricultural producers, whose net income is projected to increase. Sensitivity analyses assuming CRP grassland acres returning to crop production and higher biomass yields produced similar impacts. In addition, there are potential environmental impacts of producing dedicated biomass crops on marginal lands, which are not incorporated into the total surplus estimation. While the production of biomass for ethanol is promoted as a future energy solution, there are unexpected consequences of bioenergy policy that are often ignored. The results presented in this paper represent a robust set of expected shifts and economic impacts. They suggest a need for policy makers to be informed, and they warrant identifying and considering multiple alternative energy sources to achieve a sustainable energy goal. Reductions in consumer surplus arise from the price increases for commodities. The results of this study are obviously influenced by a number of factors and assumptions, but they also provide significant insight into the impact of cellulosic biofuels on the economy. Some limitations of this study are important to consider in future research to improve the analyses. The limitations include the following: (i) It is assumed that a dedicated cellulosic crop competes directly with existing cropland, while there are other sources of cellulosic feedstocks, such as timber and hay, that could be considered. (ii) The data on converting biomass crops to fuel are immature; consistent, science-based estimates of the conversion coefficients for specific biomass types would be useful in providing better estimates of the aggregate welfare impacts. (iii) The model does not capture the effect of future developments or technology change in the USA and in the rest of the world that could affect the US food sector.
(iv) The net farm income associated with biomass production is not accounted for in the economic impact estimation, mainly due to the unavailability of data on national SG yields, prices, and costs. Availability of such data would help to produce better estimates of total economic surplus. (v) The economic costs of mitigating externalities (water quality deterioration, soil erosion, and greenhouse gas emissions resulting from production on marginal lands) are not included. (vi) The value of having mobile fuels may override many of the impacts described in this study; the issues of form and place are not considered. However, it is important to consider the potential of an alternative fuel not only from an energy perspective but also from an economic perspective. Often, however, economic comparisons are distorted by government intervention through subsidies, tariffs, and so forth. The potential benefits of an increase in mobile fuels with a lower per-gallon price were not included in the analysis but are deemed worthy of inclusion in future research. Disclosure Views, opinions, and results presented in this paper are those of the authors and do not necessarily represent those held by the authors' current or past employers.
6,238.4
2016-02-28T00:00:00.000
[ "Economics" ]
Analytical Solutions for the Elastic Circular Rod Nonlinear Wave, Boussinesq, and Dispersive Long Wave Equations The solving processes of the homogeneous balance method, the Jacobi elliptic function expansion method, the fixed point method, and the modified mapping method are introduced in this paper. Using these four different methods, the exact solutions of the nonlinear wave equation of a finite deformation elastic circular rod, the Boussinesq equations, and the dispersive long wave equations are studied. In the discussion, further physical features of these nonlinear equations are identified, and the results indicate that these methods (especially the fixed point method) can be used to solve other similar nonlinear wave equations. Introduction Nonlinear partial differential equations are widely used to describe complex phenomena in various fields of science, such as biology, chemistry, and communication, and especially in many branches of physics, such as condensed matter physics, field theory, fluid dynamics, plasma physics, and optics. In this paper, we consider the nonlinear wave equation of a finite deformation elastic circular rod, the Boussinesq equations, and the dispersive long wave equations, which exhibit similar physical behavior. The elastic circular rod is an important structural component whose dynamics are governed by a doubly nonlinear and doubly dispersive wave equation [1]. The Boussinesq equation, first introduced in 1871, arises in several physical applications: the dynamics of shallow water waves, seen in places such as sea beaches, lakes, and rivers, are governed by the Boussinesq equation. In recent years there has been much interest in some variants of the Boussinesq systems [2][3][4][5][6]. The coupled Boussinesq equations [7] arise in shallow water waves for two-layered fluid flow; this situation occurs, for instance, when an accidental oil spill from a ship results in a layer of oil floating above the layer of water. The (2 + 1)-dimensional dispersive long wave equations [8,9] were first derived by Boiti as a compatibility condition for a "weak" Lax pair. A good understanding of the solutions of these equations is very helpful to coastal and civil engineers in applying the nonlinear water model to coastal harbor design. In nonlinear science, finding exact solutions of a nonlinear system is one of the most fundamental and significant problems. Nonlinear wave equations involve complicated nonlinearities, and such equations are often very difficult to solve. Intensive investigation has made significant progress in recent years, and many methods have been proposed to construct exact solutions, such as the Weierstrass elliptic function method, the homogeneous balance method, the sine-cosine method, the nonlinear transformation method, the finite expansion in hyperbolic tangent functions, the improved mapping approach, and the further extended tanh method. The homogeneous balance method is a powerful tool for finding solitary wave solutions of nonlinear partial differential equations; Fan introduced the homogeneous balance method into the search for Bäcklund transformations and obtained additional solutions [10]. The mapping method is a very effective direct method for constructing exact solutions of nonlinear equations; Zhang et al. made use of the auxiliary equation and expanded mapping methods to find new exact periodic solutions of the (2 + 1)-dimensional dispersive long wave equations [11].
The basic idea of the fixed point method consists in finding an iteration function that generates successive approximations to the solution [12]. Many periodic solutions have recently been expressed in terms of various Jacobi elliptic functions for a wide class of nonlinear evolution equations; these have been obtained by means of the Jacobi elliptic expansion method [13]. The longitudinal wave equation of a finite deformation elastic circular rod, the Boussinesq equations, and the dispersive long wave equations are nonlinear partial differential equations from different scientific fields, yet we find that they share the same characteristics. In this paper, the analytical solutions of the differential equations for the elastic circular rod, the Boussinesq equations, and the dispersive long wave equations are obtained by using the homogeneous balance method, the Jacobi elliptic function method, the fixed point method, and the modified mapping method. Further physical features of these nonlinear equations are identified, and the corresponding results are indicated. Three Types of Nonlinear Wave Equations The longitudinal wave equation of a finite deformation elastic circular rod is given by (1) [1], where $c_0 = \sqrt{E/\rho}$ is the longitudinal wave velocity for a linear elastic rod and $c_1 = \sqrt{\mu/\rho}$ is the shear wave velocity; $\rho$ is the density of the material, $\nu$ is the Poisson ratio, $d$ is the diameter of the rod, $E$ is the Young's modulus of the material, $\mu$ is the elastic shear modulus of the material, and $t$ is the time variable. Making the traveling wave transformation (2), substituting (2) into (1), and integrating twice with respect to the wave variable, we obtain (3), where the second integration constant is retained and the first integration constant is zero. The (2 + 1)-dimensional dispersive long wave equations were discussed in [8,9]; they can be written as (8). Suppose that the traveling wave solutions of (8) take the form (9). Substituting (9) into (8), we obtain (10), where a second integration constant again appears. Obviously, the first formula of (10) and equation (3) have the same characteristics. On the other hand, the solutions of (10) and (7) are the same as each other except for the coefficients. Therefore, in the following parts, the solutions of (5) are presented. Homogeneous Balance Method In [14,15] the homogeneous balance method is used to solve nonlinear wave equations; for example, an equation of this type was solved in [15], and the exact solution of the corresponding equation was given by the same method. To solve (5), the homogeneous balance method is improved in this paper. Suppose an ansatz of the form (13), in which the constituent functions and constants are to be determined. Substituting (13) into (5), the equations can be rewritten as (14). In (14), impose the conditions (15). Integrating the second formula of (15) and rearranging, we obtain (17). Substituting (17) into the first formula of (15) gives (18); integrating (18) gives (19), whose solution is known, and hence (21) follows. Substituting (17) and (21) into (14), the equations can be rewritten as (22). Letting all the coefficients in (22) be zero, we obtain (23). From the first and fifth formulas of (23), it can be determined that one of the functions remains undetermined while another is arbitrary. Jacobi Elliptic Function Method In [16], the NLS equation and the Zakharov equation were studied by using the Jacobi elliptic function method. In this paper, (1) and (5) are discussed by using this method.
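All of these methods begin from the same reduction of the PDEs to ordinary differential equations. A brief sketch, assuming the standard traveling wave ansatz (the paper's own transformation (2) may additionally carry a wave-number factor), reads:

$$u(x,t) = u(\xi), \qquad \xi = x - ct \;\Longrightarrow\; u_t = -c\,u'(\xi), \quad u_{tt} = c^2 u''(\xi), \quad u_x = u'(\xi), \quad u_{xx} = u''(\xi),$$

so the PDE collapses to an ODE in $\xi$ alone; integrating that ODE twice then introduces the two integration constants mentioned above, the first of which is set to zero.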
Jacobi Elliptic Sine Function Method. Let the solution be expanded in the Jacobi elliptic sine function sn as in (27). Here cn and dn denote the Jacobi elliptic cosine function and the third-kind Jacobi elliptic function, respectively, and $m$ ($0 < m < 1$) is the modulus. Substituting (27) into (3) and letting the coefficients of all the derivatives of sn be zero, we obtain the relations (30). Therefore the solution of (3) follows, with the coefficients $a_0$ and $a_1$ and the wave speed given by (30) and with an arbitrary phase constant. Thus, the solutions of (5) are (32) together with its companion formula. Jacobi Elliptic Cosine Function Method. Proceeding analogously with the cosine expansion, substituting (35) into (3) we obtain (36); letting the coefficients of all the derivatives of cn be zero gives (37), and the solution of (3) follows, with $a_0$, $a_1$, and the wave speed given by (37) and with an arbitrary phase constant. Third Kind of Jacobi Elliptic Function Method. With the dn expansion, substituting into (3) and letting the coefficients of all the derivatives of dn be zero, we obtain (44); the solution of (3) follows, with $a_0$, $a_1$, and the wave speed given by (44) and with an arbitrary phase constant. Fixed Point Method In general, fixed point theory can be divided into two types. The first type is used only to discuss the existence of a solution; the second type is used not only to discuss the existence and uniqueness of a solution but also to locate the fixed point. We are more interested in the second type. Integrating the governing formula with respect to the wave variable and using the formula in [12], we obtain (61), where a further integration constant appears. Obviously, the fixed point of the operator defined by (61) is equivalent to the solution of (3); we denote the corresponding solution expression by (65). By using the classical iteration algorithms, such as the Mann, Ishikawa, and Noor iteration methods, the solutions can be obtained. Then, the traveling wave solutions of (5) are (65) and the companion formula (66). Modified Mapping Method The modified mapping method was introduced, and the nonlinear evolution equation solved, in [21,22]; in [22], several hundred Jacobi elliptic function expansion solutions were obtained. We also use this method in this paper. Let the solution be written in terms of a mapping function $f(\xi)$ satisfying the auxiliary equation $f'' = pf + qf^{3} \Leftrightarrow (f')^{2} = r + pf^{2} + \tfrac{1}{2}qf^{4}$, where $p$, $q$, and $r$ are constants. Substituting (68) into (3), we obtain (69). Notice that the same algebraic equations arise when the coefficients of the conjugate pairs of powers $f, f^{-1}$ and $f^{2}, f^{-2}$ in (69) are set to zero; therefore there are only five independent algebraic equations. Solving them yields the coefficients; in order for $a_1$ and $b_1$ to be real numbers, the relevant constants must have opposite signs.
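The equivalence asserted in the auxiliary equation above can be verified in one line: differentiating the first-integral form (assuming $f' \not\equiv 0$) recovers the second-order form, with $r$ playing the role of the integration constant:

$$\frac{d}{d\xi}\big[(f')^{2}\big] = 2 f' f'' = \frac{d}{d\xi}\Big[r + p f^{2} + \tfrac{1}{2} q f^{4}\Big] = 2 p f f' + 2 q f^{3} f' \;\Longrightarrow\; f'' = p f + q f^{3}.$$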
Summary and Conclusions In summary, by using different methods, the nonlinear wave equation of a finite deformation elastic circular rod, the Boussinesq equations, and the dispersive long wave equations are solved in this paper, and several analytical solutions are obtained. Kink soliton solutions are obtained with the homogeneous balance method. The Jacobi elliptic function method yields periodic solutions; when the modulus $m \to 1$ (so that $\mathrm{sn}\,\xi \to \tanh\xi$ while $\mathrm{cn}\,\xi$ and $\mathrm{dn}\,\xi \to \mathrm{sech}\,\xi$), these reduce to solitary solutions. Their wave forms are of the bell, kink, exotic, and peakon types. By using the modified mapping method, peakon solutions of (3), (5), and (8) are obtained. On the other hand, after proving the existence and uniqueness of the fixed point of the operator, we can use the classical iteration algorithms to obtain the solutions; this method has wide application and can be used to solve other nonlinear wave equations. The fixed point method may be significant and important for explaining special physical problems whose analytical solutions cannot otherwise be obtained. The results indicate the following. First, the natural phenomena and the physical properties described by these equations are abundant and should be further explored. Second, there are many kinds of methods for solving these equations. Third, the arbitrary constants can be determined by the initial and boundary conditions, so that exact solutions can be obtained for specific engineering or scientific problems. It is shown that the methods proposed in this paper for the three types of nonlinear wave equations are feasible for determining exact solutions of other nonlinear equations.
2,412.2
2014-04-22T00:00:00.000
[ "Engineering", "Physics" ]
De-crystallization of Uric Acid Crystals in Synovial Fluid Using Gold Colloids and Microwave Heating In this study, we demonstrated a unique application of our Metal-Assisted and Microwave-Accelerated Evaporative Crystallization (MA-MAEC) technique for the de-crystallization of uric acid crystals, which cause gout in humans when monosodium urate crystals accumulate in the synovial fluid found in the joints of bones. Given the shortcomings of the existing treatments for gout, we investigated whether the MA-MAEC technique can offer an alternative solution for the treatment of gout. Our technique is based on the use of metal nanoparticles (i.e., gold colloids) with low power microwave heating to accelerate the de-crystallization process. In this regard, we employed a two-step process: (i) crystallization of uric acid on glass slides, which act as a solid platform to mimic a bone, and (ii) de-crystallization of the uric acid crystals on the glass slides with the addition of gold colloids and low power microwave heating; the colloids act as "nano-bullets" when microwave heated in solution. We observed that the size and number of the uric acid crystals were reduced by >60% within 10 minutes of low power microwave heating. In addition, the use of gold colloids without microwave heating (i.e., the control experiment) did not result in the de-crystallization of the uric acid crystals, which proves the utility of our MA-MAEC technique in the de-crystallization of uric acid. Introduction Gout disease, which is also known as hyperuricemia, occurs when uric acid (i.e., monosodium urate) crystals form in the joints of bones due to the accumulation of uric acid present in the blood. The crystallization of uric acid occurs for the following reasons: (i) the body increases the amount of uric acid it produces, (ii) the kidneys fail to excrete enough uric acid, or (iii) dietary habits and other risk factors, such as consumption of food with a high purine content and/or excessive alcoholic beverages, organ transplantation, being overweight, or a weak metabolism for breaking down purines. Gout is known to affect the hallux rigidus of the big toe, with symptoms of redness, stiffness, and swelling of the big toe; other parts of the body (ankles, heels, knees, wrists, fingers, and elbows) can also be affected. There are several reported drugs for the treatment of gout, including anti-inflammatory drugs (NSAIDs) [1], allopurinol [2], colchicine [3], and uricosuric agents [4]. NSAIDs are prescribed to patients because of their ability to reduce inflammation in the affected areas. Despite the widespread use of these drugs for the treatment of gout, their side effects, such as stomach bleeding and ulcers, thinning bones, poor wound healing, and a decreased ability to fight infection, pose a threat to human health. In this regard, there is still an urgent need for new methodologies for the treatment of gout that minimize the risks of other bodily complications. Recently, the Aslan Research Group developed a technique called Metal-Assisted and Microwave-Accelerated Evaporative Crystallization (MA-MAEC), in which organic and drug compounds achieve complete crystal growth in a fraction of the time required by conventional techniques [5][6][7][8]. The MA-MAEC technique is based on the combined use of metal nanoparticles immobilized on solid surfaces and microwave heating, where a microwave-induced temperature gradient is created between the metal nanoparticles (cooler) and the solution (warmer) during microwave heating.
As a result of the microwave-induced temperature gradient, the drug compounds are driven from the warmer solution to the surface of the cooler metal nanoparticles, where the nucleation and crystallization processes occur. One can modify the MA-MAEC technique by using metal colloids in solution and immobilizing the crystals onto solid surfaces (Scheme 1-Top), where the microwave-induced temperature gradient still exists. In this regard, metal colloids in solution are used for the de-crystallization of uric acid crystals: the colloids convert microwave energy into kinetic energy to move about the uric acid solution, and the collisions between the metal colloids and the uric acid crystals result in the breakdown of the crystals (Scheme 1-Top). In this communication, we explore the use of gold colloids with our MA-MAEC technique for the de-crystallization of uric acid. This was performed on a blank modified glass slide as our platform, where uric acid was crystallized and then de-crystallized with the addition of gold colloids. The combined use of gold colloids and microwave heating resulted in the de-crystallization of uric acid crystals (i.e., a 60% reduction in the number of uric acid crystals). On the other hand, the use of microwave heating or gold colloids separately, or incubation at room temperature, did not result in the de-crystallization of the uric acid crystals, which proves the effectiveness of using gold colloids and microwave heating together. Materials Sulfuric acid and hydrogen peroxide were purchased from Pharmco Products Inc. Deionized water was purified via a Millipore Direct-Q 3 UV apparatus. Glass slides of 0.96 to 1.06 mm thickness were purchased from Corning Incorporated. Uric acid and 20 nm gold colloids were purchased from Sigma-Aldrich (USA, catalog number 741965; ∼7.2×10^11 particles/mL). Silicon isolators composed of 12 wells (30 μL capacity) and targets (57 mm in diameter) were purchased from Electron Microscopy Sciences. Methods Crystallization and de-crystallization of uric acid on blank glass slides. Standard glass microscope slides were cut into eight equal pieces, cleaned, and submerged in freshly prepared piranha solution (3:1 sulfuric acid:hydrogen peroxide) for 10 minutes, followed by a thorough rinse with deionized water and air drying. Silicon isolators (2.0 mm deep and 4.5 mm in diameter) were attached to one piece of the cut glass slides. 20 μL of uric acid solution (prepared by mixing 10 mg of uric acid with 20 mL of deionized water) was added to each well and allowed to crystallize at room temperature. After the uric acid crystals were grown, 10 μL of bovine synovial fluid at room temperature (from Lampire Biological Laboratories) was added to the wells, and four different experiments were carried out (Scheme 1-Bottom): ○ Experiment 1: uric acid crystals with gold colloids at room temperature, where 10 μL of gold colloids was incubated inside the wells at room temperature; ○ Experiment 2: uric acid crystals with gold colloids using microwave heating, where 10 μL of gold colloids was incubated inside the wells during microwave heating (700 W output kitchen microwave, power level 1); ○ Control 1: uric acid crystals without gold colloids at room temperature, where 10 μL of deionized water without gold colloids was incubated inside the wells at room temperature; ○ Control 2: uric acid crystals without gold colloids using microwave heating, where 10 μL of deionized water without gold colloids was incubated inside the wells during microwave heating.
Optical images of the crystals were taken at one-minute increments with an optical microscope to observe the de-crystallization of the uric acid crystals (i.e., the samples were taken out of the microwave for 30 s at each observation). The number and size of the uric acid crystals were monitored using Motic software. Figure 1 shows a comparison of the optical images of uric acid crystals with and without gold colloids incubated at room temperature and under microwave heating. In Experiment 1 and Control 1, the size and number of the uric acid crystals remained the same after 10 minutes of incubation. The same observation was made when the uric acid crystals were exposed to microwave heating without gold colloids (Control 2). On the other hand, the use of gold colloids with microwave heating (Experiment 2) resulted in a significant reduction in the number and size of the uric acid crystals after 10 minutes. Figure 2 shows higher-resolution optical images of the uric acid crystals used in Experiment 2, which reveal that the initial size of the uric acid crystals (t = 0 min) was 25±16 μm; the smaller uric acid crystals appeared in both isolated and aggregated forms. However, after the addition of gold colloids and exposure to microwave heating, the number and size of the uric acid crystals were significantly reduced. Figure 3 shows the time progression of the uric acid crystals with the addition of gold colloids and microwave heating for 10 minutes; the number of uric acid crystals was significantly reduced after 4 minutes of microwave heating. The average number of uric acid crystals at the beginning of each experiment in this study was 70±10. Results and Discussion In order to assess the effect of the use of gold colloids and microwave heating on the uric acid crystals quantitatively, the percentage retention value of the uric acid crystals in all experiments was calculated by dividing the number of crystals at each observation time by the initial number of uric acid crystals, shown on a scale of 0 to 1 (Fig. 4). The percentage retention value of the uric acid crystals without gold colloids at room temperature (Control 1) remained the same for 10 minutes, indicating that the uric acid crystals did not dissolve in synovial fluid. The incubation of uric acid crystals with gold colloids at room temperature (Experiment 1) showed a 5% decline in the percentage retention value, which implies that gold colloids in solution produce only a slight degree of de-crystallization at room temperature. When the uric acid crystals were exposed to microwave heating without gold colloids (Control 2), the percentage retention value initially increased by 15% and then decreased to the initial level, which can be attributed to the breakage of larger uric acid crystals into smaller ones (data not shown) and to partial de-crystallization. When the uric acid crystals were exposed to microwave heating after the addition of gold colloids (Experiment 2), the percentage retention value decreased by 40% after 4 minutes and by 60% after 10 minutes of microwave heating. The average number and size of the uric acid crystals at the end of Experiment 2 were 30±5 and 19±10 micrometers, respectively.
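The retention metric is simple to reproduce. In the sketch below, only the initial count (~70 crystals) and the reported 40% and 60% reductions at 4 and 10 minutes are taken from the text; the intermediate counts are illustrative placeholders.

```python
# Percentage retention of uric acid crystals: retention(t) = N(t) / N(0),
# reported on a 0-to-1 scale as in Fig. 4.
from typing import Sequence

def retention(counts: Sequence[int]) -> list[float]:
    """Convert crystal counts at successive observation times to retention."""
    n0 = counts[0]
    return [n / n0 for n in counts]

# Illustrative Experiment 2 counts at t = 0..10 min; only the endpoints
# (70 initial, 0.6 retention at 4 min, 0.4 at 10 min) mirror the paper.
exp2_counts = [70, 63, 55, 48, 42, 41, 39, 37, 34, 31, 28]
r = retention(exp2_counts)
print(f"t=0: {r[0]:.2f}, t=4 min: {r[4]:.2f}, t=10 min: {r[10]:.2f}")
# -> t=0: 1.00, t=4 min: 0.60, t=10 min: 0.40
```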
It is important to comment on the mechanism behind microwave heating with gold colloids in the de-crystallization process and to compare it with de-crystallization at room temperature with gold colloids. The collision events between the gold colloids in solution and the uric acid crystals on the glass surface increase because the kinetic energy of the gold colloids increases when they are exposed to microwave heating [9]. The collision events between gold colloids and uric acid crystals at room temperature are expected to be significantly fewer, due to the slow diffusion rates of gold colloids in solution [9], than for those exposed to microwave heating. In addition, the number of gold colloids (~10^12 particles/mL, typical of chemically synthesized gold colloids) is significantly larger than the number of uric acid crystals (ca. 70 in this study), which results in a greater number of collisions that break down and ultimately reduce the size of the uric acid crystals. We note that in our experiments the temperature of the synovial fluid was 24 °C, and we did not attempt to measure the temperature change during exposure to microwave heating in this rapid communication. Based on our previous work, we can report that the temperature of the synovial fluid does not exceed 30 °C after 1 minute of microwave heating at power level 1 (i.e., a duty cycle of 3 s) [7,10]. A detailed investigation of the de-crystallization of uric acid crystals is underway and will be reported in due course. Conclusions In this work, we demonstrated the de-crystallization of uric acid crystals in the presence of gold colloids and low power microwave heating. In order to simulate the conditions in the human body, uric acid crystals were first grown on glass slides (a model bone surface). Subsequently, gold colloids in synovial fluid were added to the glass slides with uric acid crystals, which were then exposed to microwave heating or incubated at room temperature. The exposure of the uric acid crystals to microwave heating in the presence of gold colloids resulted in up to a 60% reduction in the number of uric acid crystals. On the other hand, the use of colloids at room temperature, microwave heating without gold colloids, and the absence of both did not affect the number and size of the uric acid crystals. The observations in this report demonstrate that the combined use of gold colloids and microwave heating can provide a framework for the further development of the proposed technique for future in vivo studies. Figure 1. Optical images of uric acid crystals grown on glass slides with and without gold colloids at both room temperature and under microwave heating after 10 minutes of incubation. Scheme 1. (Top) Depiction of the de-crystallization of uric acid crystals with gold colloids and of the control sample (without gold colloids). (Bottom) Experimental procedures used in this study.
2,914.8
2015-01-23T00:00:00.000
[ "Materials Science", "Medicine" ]
Entering the Augmented Era: Immersive and Interactive Virtual Reality for Battery Education and Research
Abstract: We present a series of innovative serious games that we have been developing for four years using Virtual Reality (VR) technology to teach battery concepts at the university (from undergraduate to doctorate levels) and also to the general public in the context of science festivals and other events. These serious games allow interacting with battery materials, electrodes and cells in an immersive way. They allow experiencing situations that are impossible in real life, such as building battery active material crystal structures by hand at the nanometer scale, flying inside battery composite electrodes to calculate their geometrical tortuosities at the micrometer scale, experiencing the electrochemical behavior of different battery types by driving an electric vehicle, and interacting with a virtual smart electrical grid impacted by 3D-printed devices operated from the real world. Such serious games embed mathematical models with different levels of complexity representing the physical processes at different scales. We describe the technical characteristics of our VR serious games and their teaching goals, and we provide some discussion of their impact on motivation, engagement and learning following four years of experimentation with them.
1. Energy storage and virtual reality Energy storage is one of the most prominent challenges humanity has to face in the 21st century. [1] This is mainly due to the increasing use of smart grids and of intermittent renewable energies, driven by the limitation of fossil fuels and climate change. [2] Electrochemical energy storage technologies such as lithium-based rechargeable batteries are called to play a major role in addressing this challenge, owing to the simplicity of their overall operation principles and their high energy densities. Since their first commercialization in 1991 by Sony, Lithium Ion Batteries (LIBs) have triggered the emergence of a wide spectrum of portable devices, and they are nowadays the workhorse of the renaissance of Electric Vehicles (EVs). [3][4][5] The desired massive electrification of the transportation sector still requires rechargeable battery technologies that are better than current LIBs in terms of specific energy, cost, recyclability and safety. To make this happen, the design of advanced LIBs with a new generation of electrode materials and smart functionalities (e.g. embedded state-of-health sensors and self-healing actuators) is required. [6] Other lithium-based rechargeable battery technologies have emerged at the laboratory scale and have been the focus of intense study in recent decades, such as Lithium Sulfur Batteries (LSBs) and Lithium Oxygen Batteries (LOBs), due to their theoretical specific energies being higher than those achievable in current LIBs. [7,8] However, significant technical challenges still persist in LSBs and LOBs, such as premature capacity fading in the former and severe electrolyte instability and poor rechargeability in the latter. [9,10] In general, any scientific approach adopted to overcome the aforementioned technical challenges, as well as to design and optimize any kind of electrochemical energy storage device, requires transdisciplinary efforts encompassing at least materials science, chemistry, physics and engineering. Indeed, lithium-based batteries are made of multiple materials, and their operation involves numerous physicochemical mechanisms occurring simultaneously at multiple spatial scales. [11] To successfully implement the required transdisciplinary approaches and to favor the invention of disruptive energy storage technologies, it is of paramount importance to encourage the emergence of tools that can inspire the current and future generations of battery scientists. Virtual Reality (VR) technology constitutes one such tool because it can ease training and education and stimulate creativity in research. VR environments were adopted very early for training in abstract or complex situations. [12] They have proven very useful for training in cases where performing the real situation is difficult, expensive or dangerous (e.g. in the space, military and nuclear sectors). [13][14][15][16][17] The immersive experience in an environment inspired by physically realistic phenomena, and with which it is possible to interact, represents an important asset of VR: it helps users develop an intuitive knowledge of the simulated system. [18]
Thanks to recent technical progress, VR hardware has become more compact and accessible to much wider audiences: Oculus and HTC Vive are examples of commercial VR hardware. [19][20] These systems have head-mounted displays and allow easy interaction with the virtual environment through controllers equipped with motion tracking (Figure 1). Figure 1. a) Illustration of the HTC Vive VR hardware (adapted from Ref. [20]); b) overall principles (need for a head-mounted display, 360° virtual environment, full immersion, full interaction using controllers); c) typical location of the base stations and the user (adapted from Ref. [20]); d) a VR user in action. In recent years, VR technology has started to be used significantly for educational and research purposes in hard sciences such as mathematics, physics and chemistry. [21][22][23][24][25] In chemistry especially, researchers have proposed VR tools to visualize quantum mechanics and molecular dynamics simulation results or to teach experiments in an immersive and interactive way. [26][27][28][29][30][31][32] In the battery field, VR also has strong potential to provide a fully immersive and interactive experience that can tremendously ease the understanding of the concepts behind the working principles of materials, components, cells and packs. VR could be used to put users in situations that are impossible in reality, such as manipulating and navigating inside a material a few micrometers in size, interacting with it and measuring the consequences of the interactions in real time. Surprisingly, despite these tremendous promises, the use of VR in the battery field has never been reported. Instead, the most common traditional way of providing virtual experience in the battery field is computational modeling. For more than 50 years, computational models have proven to be useful simulation tools to understand battery operation principles and to carry out battery design and optimization. [11,33] Such models are based on mathematical equations describing physicochemical mechanisms, which are solved using computer programs. The models therefore constitute virtual materials or batteries with which one can perform virtual experiments, such as investigating how operating conditions (e.g. the applied current density) impact outcomes such as the capacity or the aging (a toy example of such a virtual experiment is sketched in the code below). Numerous models have been proposed under several geometric representations of the cells and the electrodes within them, in one dimension (1D), two dimensions (2D) or three dimensions (3D). [11] However, the efficient development and use of these models require some programming skills. Moreover, the visualization of results arising from 3D models (e.g. the spatial distribution of chemical species in the electrodes) is not trivial: images remain "confined" to 2D computer screens. This issue of 2D representation of 3D objects means that students and battery researchers are often confronted with difficulties in the abstraction and three-dimensional conceptualization of battery materials and components, such as the active material crystal structures at the nanoscale, the composite electrode porosity at the mesoscale and the theoretical operation principles of these batteries in real applications. Some educators and science communicators have used classical 3D glasses (like the ones used to watch movies in theaters), [34] but these glasses do not permit individual or collective interaction with the objects or immersion into them. In this Concept, we report six VR serious games we started to develop four years ago, allowing students and researchers to interact in an immersive and realistic way with virtual battery materials, composite electrodes, battery cells in EVs and smart electrical grids, as well as with the mathematical models used in the VR environment simulations.
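The "virtual experiment" idea described above can be made concrete with a deliberately minimal sketch (written here in Python; our games themselves are built with the Unity engine). It simulates Fickian lithium diffusion into a single planar active-material slab under a constant lithiation flux and stops when the surface concentration saturates, so that the delivered capacity drops as the applied rate increases. All parameter values and names are illustrative assumptions, not values from our models.

```python
import numpy as np

# Toy "virtual experiment": 1D lithium diffusion in a planar active-material
# slab of thickness L, with a constant lithiation flux at the surface.
# Discharge ends when the surface concentration reaches c_max; the delivered
# "capacity" is flux * time. All values below are illustrative assumptions.

D = 1e-14        # solid-state diffusivity (m^2/s), illustrative value
L = 5e-6         # slab thickness (m)
c_max = 1000.0   # saturation concentration (mol/m^3), illustrative value
N = 100          # grid points
dx = L / (N - 1)
dt = 0.4 * dx**2 / D                    # explicit-scheme stability limit

def delivered_capacity(flux):
    """Integrate Fick's law until the surface saturates; return mol/m^2 inserted."""
    c = np.zeros(N)                     # start fully delithiated
    t = 0.0
    while c[-1] < c_max:
        lap = np.zeros(N)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c += dt * D * lap
        c[0] = c[1]                     # zero-flux boundary at x = 0
        c[-1] = c[-2] + flux * dx / D   # imposed lithiation flux at x = L
        t += dt
    return flux * t

for flux in [1e-6, 2e-6, 4e-6]:         # higher flux ~ higher C-rate
    print(f"flux {flux:.0e} mol/m^2/s -> capacity {delivered_capacity(flux):.2e} mol/m^2")
```

Running the sketch shows the classic trend captured by such models: the higher the imposed rate, the sooner the surface saturates and the smaller the capacity delivered before cutoff.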
With serious games [35][36] we refer here to games designed for learning the operation principles of rechargeable batteries, with the aim of raising the motivation, engagement and learning efficiency of students and of the general public, and also of stimulating the creativity of battery researchers. Our pedagogical activities using these VR serious games were recently recognized with one of the French National Prizes for Pedagogy Innovation in 2019 (PEPS 2019). [37][38][39][40] The goal of this Concept is to present the main characteristics and pedagogical goals of these VR serious games, as well as some illustrations of their utilization over the last four years. Some discussion of their impact on the motivation and learning of students and other users is also provided, without the intention of sharing detailed ergonomic and psychological assessments here, in view of the readership of this Journal; such assessments are being shared by us elsewhere (see for instance Ref. [41]). We also discuss the potential of these VR tools to boost battery R&D. 2. Battery virtual reality serious games All our VR serious games were coded using the Unity engine and Revia® technology. The Crystal VR serious game, developed by us in 2019, addresses the structural (crystallographic) properties of the active materials. [43] Teaching crystallography to students often faces a dilemma: how can one draw in two dimensions symmetry operations and crystal structures that are by nature three-dimensional? Of course, perspective drawings are used, but still, some students and even battery researchers are not at all able to clearly see (and thus to understand) a 3D representation on a 2D plane. The crystal structure of any material is the periodic repetition of a so-called "unit cell" in the three directions of space. This "unit cell" is constituted of atoms (of the same or different chemical nature) related by symmetry operations. If we remove the elements of symmetry, we obtain what is called the "asymmetric unit". Taking the example of table salt (NaCl), we can see that the "asymmetric unit" is made from only two atoms (one sodium atom and one chlorine atom, depicted as yellow and green spheres respectively in Figure 3). However, adding the symmetry operations, it turns out that the chlorine is actually surrounded by six sodium atoms in a very specific arrangement, called an octahedron, in the "unit cell". Finally, all those "unit cells" are repeated in the three directions of space, ultimately forming the salt crystal. In crystallography, symmetry operations are divided into four main categories, including: • Mirrors (labelled "m"); • Rotation axes of order n. As mentioned earlier, explaining and visualizing the symmetry operations is not an easy task. As can be seen in Figure 4, being able to differentiate a mirror from a rotation axis of order 2 (2π/2 = 180°) is not trivial, especially when using spheres (which are highly symmetrical objects). Another feature allows the user to change the chemical nature of the different atoms and also to build his/her own crystal structure from an empty unit cell. Both Crystal workshops are very versatile, so that changes and improvements can easily be made. The Crystal Structure workshop has been developed as a tool to better visualize crystal structures. We can imagine creating exercises in which students are asked to construct specific crystal structure types: they should select the proper unit cell, the proper atoms and the proper elements of symmetry in order to build the correct crystal structure.
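As a complement to the description above, the following minimal Python sketch (illustrative only, not the Crystal VR code, which runs in Unity) shows the "asymmetric unit plus symmetry operations" logic for rock salt: starting from one Na and one Cl atom, the face-centring translations generate the eight atoms of the conventional cubic unit cell.

```python
import numpy as np

# Rock salt (NaCl, space group Fm-3m): the asymmetric unit holds one Na and
# one Cl atom; applying the face-centring translations generates the atoms of
# the conventional cubic unit cell (fractional coordinates, modulo 1).

asymmetric_unit = {"Na": (0.0, 0.0, 0.0), "Cl": (0.5, 0.5, 0.5)}
fcc_translations = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]

def build_unit_cell(asym, translations):
    """Apply centring translations to every atom of the asymmetric unit."""
    cell = []
    for element, pos in asym.items():
        for tr in translations:
            frac = tuple(round((p + t) % 1.0, 6) for p, t in zip(pos, tr))
            cell.append((element, frac))
    return cell

for element, frac in build_unit_cell(asymmetric_unit, fcc_translations):
    print(element, frac)
# 8 positions in total: 4 Na + 4 Cl, i.e. the Z = 4 rock-salt conventional cell
```

For rock salt the centring translations alone suffice; lower-symmetry structures would additionally require applying the rotation, mirror and inversion operations discussed above.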
Another improvement could be the display of the X-ray diffraction pattern generated from the selected (or created) crystal structure. X-ray diffraction is used every day in laboratories working in the battery field. An X-ray source illuminates a sample, which reflects (diffracts) the X-ray beam, yielding the so-called X-ray diffraction pattern: a succession of peaks with different intensities, from which crystallographers can deduce the crystal structure of a given material. The positions of the peaks are related to the size of the "unit cell" described above, and their intensities to the chemical nature and positions of the atoms within the "unit cell". Conversely, knowing the exact crystal structure, it is also possible to predict and calculate what the X-ray diffraction pattern should look like. It would indeed be very interesting for students to be able to see such simulated patterns corresponding to the crystal structure they are looking at, and to observe how the pattern evolves when they change the chemical nature of the atoms, their positions and so on. Furthermore, battery researchers are really interested in being able to "fly" into crystals. It is possible to imagine an evolution of Crystal VR for seeing and following the diffusion path that a lithium or a sodium ion takes when migrating through the active material. This could allow researchers to better understand why a given electrode material is more efficient than another and, consequently, could help them design new materials with enhanced properties. Tortuosity VR serious game The effective diffusion coefficient of the lithium ions in the electrolyte is given by [46] Deff = (ε/τ)·D (2) where the porosity ε is defined as the ratio between the volume occupied by electrolyte in the electrode and the total volume of the electrode, τ is the geometrical tortuosity (defined in Eq. (1) as the ratio between the effective transport path length and the electrode thickness Lelectrode) and D is the bulk diffusion coefficient in the electrolyte. Even if the geometrical tortuosity concept is easy to understand in two dimensions (Figure 7b), its understanding in three dimensions is not trivial at all, as it requires imagining three-dimensional paths allowing continuous transport between one side of the electrode and the other. Numerous students and battery researchers have difficulties imagining and visualizing such three-dimensional paths. This is why numerous theories exist that aim at calculating the geometrical tortuosity of electrodes as a function of their porosity, the most famous one being the Bruggeman relation, which postulates that [47] τ = 1/√ε (3) Figure 7. a) Schematic of an electrode in which the quantities entering the tortuosity definition (Eq. (1)) are also indicated; Lelectrode stands for the electrode thickness; b) traditional (bi-dimensional) way of illustrating the concept of a "tortuous path" (arrow between points A and B). Note that the porous channel at the top has infinite tortuosity, whereas the one at the bottom has lower tortuosity than the one in the middle. However, these theories assume the electrodes to have unrealistically ideal geometries. In the case of the Bruggeman relation above, it holds when the electrode is constituted of spherical particles at low volume fraction. It has been demonstrated that other particle shapes lead to other mathematical expressions relating the geometrical tortuosity to the porosity. [48] Transmission line models have been used to assess the geometrical tortuosities of LIB electrodes by fitting their parameters to Electrochemical Impedance Spectra. [49] However, electrodes are three-dimensional objects, and therefore they have three geometrical tortuosity values, along the Cartesian directions X, Y and Z. Very different values of these three tortuosities would indicate a highly anisotropic electrode.
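As an illustration of these direction-dependent tortuosities, the short Python sketch below estimates τX, τY and τZ on a synthetic voxelised pore structure by taking the shortest 6-connected path through the pores and dividing it by the sample thickness, then prints the porosity-based Bruggeman estimate of Eq. (3) for comparison. This is a simplified stand-in, not the algorithm used in Tortuosity VR or in tools such as TauFactor.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
pores = rng.random((30, 30, 30)) > 0.35   # True = pore voxel (synthetic sample)

def directional_tortuosity(pore, axis):
    """Shortest pore path along `axis` divided by sample thickness (BFS)."""
    p = np.moveaxis(pore, axis, 0)
    n = p.shape[0]
    dist = np.full(p.shape, -1, dtype=int)
    queue = deque()
    for idx in zip(*np.nonzero(p[0])):        # seed every pore voxel on the inlet face
        dist[(0, *idx)] = 0
        queue.append((0, *idx))
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        x, y, z = queue.popleft()
        if x == n - 1:                        # reached the outlet face
            return (dist[x, y, z] + 1) / n    # path length / thickness
        for dx, dy, dz in steps:
            u, v, w = x + dx, y + dy, z + dz
            if (0 <= u < n and 0 <= v < p.shape[1] and 0 <= w < p.shape[2]
                    and p[u, v, w] and dist[u, v, w] < 0):
                dist[u, v, w] = dist[x, y, z] + 1
                queue.append((u, v, w))
    return float("inf")                       # no percolating path: infinite tortuosity

eps = pores.mean()
print(f"porosity = {eps:.2f}, Bruggeman tau = {eps**-0.5:.2f}")
for axis, name in enumerate("XYZ"):
    print(f"tau_{name} = {directional_tortuosity(pores, axis):.2f}")
```

Note how the sketch mirrors the 2D illustration of Figure 7b: a channel with no percolating path comes out with infinite tortuosity, and the three directional values differ whenever the structure is anisotropic.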
Such anisotropy can lead to heterogeneities in the battery operation. [45] Evaluating such tortuosity anisotropy is of major importance. Commercial software such as GeoDict [50] and freeware such as TauFactor [51] allow evaluating such tortuosity values by solving the steady-state Fick's law numerically and comparing the solution to the analytical one. However, these numbers are averaged. We believe that designing a tool that can transform a student or a researcher into an "ion" moving in the electrolyte filling the pores can be really interesting for apprehending the anisotropic character of the tortuosity and better analyzing its impact on the overall electrode operation principles. The Tortuosity VR serious game proposes to the user to fly along the thickness of electrode mesostructures in order to calculate their geometrical tortuosities (Figure 8). The user can choose among the X, Y and Z directions by holding one of the HTC Vive controllers. At the beginning, the user faces the "X" side of the electrode; when the controller is placed on the "Y" or "Z" circle, the electrode turns to face its "Y" or "Z" side to the user. When ready, in order to start flying, the user places the controller on "start". Once flying, when the user extends his/her arms he/she accelerates, while bringing the arms back to the chest slows the user down. If the user touches the solid material constituting the electrode, she/he is sent back to the starting point, with the possibility of choosing the same direction or another one to fly along. In order to find his/her way (in case of reaching an impasse), the user also has the right to fly back (towards the starting point), but for no longer than 10 seconds. Nanoviewer VR serious game The Nanoviewer VR serious game was developed by us in 2017 to allow a user to manipulate, in an immersive environment, a wide diversity of digitalized battery-related objects such as electrolyte molecules, active material or solid electrolyte crystals and composite electrode mesostructures (Fig. 11). The goal here is to ease their visualization and the assessment of asymmetries, anisotropies and the spatial organization of the materials in three dimensions. Nanoviewer VR is the precursor of Crystal VR and Tortuosity VR, but it allows manipulating any kind of digital object, with no purpose other than visualization. Molecules and crystals can originate from any kind of software for atomistic and molecular analysis and calculations, such as LAMMPS [52] or VESTA, [53] provided that the output files from these software packages are recorded in an atom-type file format. The composite electrode mesostructures can originate from tomography characterizations or from a physical model simulating their manufacturing process. The latter model, already reported by us, [54][55][56][57] is supported by a Coarse-Grained Molecular Dynamics (CGMD) approach simulating LIB electrode slurries and the resulting electrode mesostructures upon slurry solvent evaporation. By using CGMD, we have calculated several electrode mesostructures resulting from several formulations, quantified by the weight ratio between LiNi1/3Mn1/3Co1/3O2 active material particles and carbon-binder, and by the LiNi1/3Mn1/3Co1/3O2 particle size distribution. [54][55][56]
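A Nanoviewer-style loader only needs to parse an atom-type file into a list of renderable objects. The sketch below shows this for the common .xyz format (which, for instance, VESTA can export); the file name is hypothetical and the code is illustrative, not our actual importer.

```python
def load_xyz(path):
    """Parse a .xyz file: first line = atom count, second = comment,
    then one 'Element x y z' line per atom (coordinates in Angstrom)."""
    atoms = []
    with open(path) as fh:
        count = int(fh.readline())
        fh.readline()                      # comment line, ignored here
        for _ in range(count):
            element, x, y, z = fh.readline().split()[:4]
            atoms.append((element, float(x), float(y), float(z)))
    return atoms

# e.g. atoms = load_xyz("electrolyte_molecule.xyz")  # hypothetical file name
```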
Figure 11 shows examples of such digitalized objects. The user can: • enlarge the digital object, using an arm-spreading movement with both controllers, to be able to "dive" inside the materials, composite electrodes, etc.; • shrink the material or composite electrode mesostructure, using the same movement with both controllers; • using a single controller, grab the material or electrode mesostructure to rotate it, move it in space, move it away or bring it closer (Figure 13). The spatial organization of these materials in a composite electrode determines its practical properties: for instance, the surface area of contact between pores and active material determines the power density of the composite electrode. Indeed, parts of the active material covered by CBD remain electrochemically less active towards Li insertion or de-insertion; even though the CBD may contain some microporosity, the path for Li+ to move through it can be significantly tortuous. [56,58] Other practical properties of LIBs are also affected by such interfaces, such as cell durability and safety. [59] A student using the Nanoviewer VR serious game. What is seen in the VR environment is projected on a screen to allow others to see it as well. The Great Li-Air Escapade VR serious game. As mentioned in the Introduction, LOBs, also called Lithium Air Batteries, have attracted significant interest for electric transportation in view of their high energy density and high theoretical capacity, even though the aforementioned electrolyte stability and rechargeability issues keep them very far from commercialization. [60] A typical LOB cell consists of a lithium metal anode, an electrolyte and a porous (typically carbon-based) cathode open to oxygen; the game embeds a kinetic Monte Carlo (kMC) model describing the roles of Li+, O2(sol) and LiO2(sol) in the formation mechanism of Li2O2. This also includes the pore wall passivation by Li2O2, pore volume clogging and electronic tunneling effects, as described in Ref. [63]. The latter captures the fact that the passivating film of Li2O2 can still conduct electrons by quantum tunneling until it reaches a thickness of 10 nm, at which point it becomes non-conductive. The kMC model also captures the heterogeneous Li2O2 formation along the pore channel length: Li2O2 forms closer to the O2 inlet, because O2 transport is typically slower in LOB electrolytes than that of Li+ ions. [65][66] Examples of calculated Li2O2 distributions at two depths of discharge and for different pore radii are reported in Figure 17. It can be noticed that the pore with 10 nm radius becomes totally clogged at the end of discharge whereas larger pores do not, because the Li2O2 film is assumed to stop growing when it reaches 10 nm thickness. For the larger pores, it is the film passivation that leads to the end of discharge of the LOB. A lookup table procedure is adopted because it is hard to implement real-time kMC simulations in the game without penalizing the overall computational cost of the VR environment. The lookup table has been built by running offline kMC simulations, which allows extracting the pore geometry evolution as a function of the discharge history and the pore size. By employing this lookup table, the game can render the pore-scale evolution in real time. The Great EV Escapade VR serious game This game builds on the battery models previously reported by us. [56,64,67] In the LSB case, the (carbon-based) cathode evolution representation is more complex (Figure 21). In an LSB cathode upon cell discharge, the solid S8 initially present in the cathode first dissolves in the electrolyte, and is then sequentially reduced to the polysulfides S8 2-, S6 2-, S4 2-, S2 2- and S 2-.
Li2S2 and Li2S form and precipitate in the cathode pores, leading to pore clogging and pore wall passivation that slow down the S8 reduction kinetics. In the game, the sulfur particles initially present in the cathode pores are represented as randomly located yellow cubes, and the precipitates are represented as green spheres that gradually form and randomly locate on the carbon surface upon LSB cell discharge. Pre-calculated lookup tables built with kMC simulations can be used to describe sulfur particle dissolution, polysulfide formation and Li2S precipitation. [67] The Li+ intercalation process in a LiNi1/3Mn1/3Co1/3O2-based cathode upon LIB discharge is represented by a gradual and spatially localized color change in the cathode mesostructure (ranging from green for the pristine cathode towards red for the fully intercalated electrode). In the LOB case, the Li2O2 distribution in a carbon matrix-based cathode arises from a random location algorithm. The Smart Grid MR serious game A smart grid is an electricity grid enhanced by information technology for interlinked and automated electricity generation, transmission, distribution and control. [68][69] It encompasses electricity sources (e.g. renewables), energy sinks (e.g. EV recharge stations, houses, industry) and stationary energy storage to ensure continuous electricity provision when intermittent renewable energies are involved (Figure 24). A smart grid operation can be seen, within the scope of game theory, as a collective game involving cooperation (e.g. an ensemble of electricity generators) and competition (e.g. electricity provision vs. consumption), from which complex behavior emerges. [70][71] In order to ease students' understanding of the interplay between electricity generation, distribution, storage and consumption, in 2019 we designed and developed the Smart Grid MR serious game. This serious game proposes a collaborative platform mixing a VR environment with real objects, constituting a "mixed reality" concept (for which the acronym "MR" stands) (Figure 25). The player also has to collect gifts, which randomly appear on the virtual map. In contrast to the EV serious games described above, here the number of presents is unlimited; the player's goal is therefore to collect as many as she/he can before losing. The more gifts are collected, the more difficult the game becomes, as the number of trucks and the frequency of appearance of barriers increase. The player can lose if she/he is captured by the trucks or if the battery in the EV becomes fully discharged. To avoid the latter, the player has to recharge the EV battery from the grid's stationary storage in time. There is also a virtual tidal power station in the VR environment, visible on the 3D map, that creates continuous energy from the nearby seashore. This station is therefore not activated by the players, but it can be turned off through a Python script. The solar panel cannot produce electricity when it is raining in the virtual city, and vice versa. The degree of electricity production of the three real devices can be followed through three gauges (top right in Figure 26b). If an energy source reaches the orange zone of its gauge, the energy produced is no longer injected into the stationary battery for 10 seconds.
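The update logic just described (three gauges, an "orange zone" that suspends injection for 10 seconds, a stationary battery drained by the EV recharge) can be summarized in a toy tick loop such as the Python sketch below. The device names, thresholds and numbers are illustrative guesses, not the actual game code.

```python
import random

TICK = 1.0                      # seconds per simulation step
battery = 50.0                  # stationary battery charge (arbitrary units)
lockout = {"solar": 0.0, "device_a": 0.0, "device_b": 0.0}  # remaining lockout (s)

def step(production, ev_draw, raining):
    """One tick of the grid balance: sources feed the stationary battery
    unless raining (solar) or locked out; the EV recharge drains it."""
    global battery
    for source, power in production.items():
        if source == "solar" and raining:
            continue                    # solar produces nothing in the rain
        if lockout[source] > 0:
            lockout[source] -= TICK     # locked out: energy is not injected
            continue
        if power > 8.0:                 # gauge reaches the orange zone
            lockout[source] = 10.0
            continue
        battery += power * TICK
    battery = max(0.0, battery - ev_draw * TICK)

for _ in range(20):
    step({"solar": random.uniform(0, 10),
          "device_a": random.uniform(0, 10),
          "device_b": random.uniform(0, 10)},
         ev_draw=5.0, raining=random.random() < 0.3)
print(f"stationary battery after 20 s: {battery:.1f}")
```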
The worst scenario for the EV driver would be that both her/his battery and the stationary battery are fully discharged: in that situation, the player also loses the game, as she/he cannot recharge the EV. Usage and impact Since their creation, we have been using our serious games as a complement to our lectures and outreach activities. The added value of these VR games is therefore twofold: to experiment in VR with the concepts seen in the lectures in order to validate the acquired knowledge, and to identify through the serious games the students having difficulties, so as to make them revise the concepts. The immersive and interactive VR characteristics were also shown to allow students to better understand three-dimensional concepts (such as crystal symmetries or electrode anisotropies). [40] We have also studied in depth how Nanoviewer VR impacts students' learning performance and motivation. [41] For that purpose, psychology undergraduate students (i.e. non-battery-specialists) were split into two groups, one using VR (experimental group) to visualize and interact with a LIB electrode mesostructure (cf. Figure 13), and one having access to images, under different perspectives, of the same electrode printed on paper (control group). Conclusions In this Concept we presented six immersive and interactive VR serious games for battery education and research. Even though we did not find any player who complained about cybersickness or mental overload while playing our VR serious games, in the near future we are going to study these aspects in more detail in order to further optimize the ergonomics of our serious games. We also think that VR can help to overcome inequalities in the education system. Such inequalities affect students with social communication issues, interaction difficulties and/or cognitive and socio-emotional characteristics that require special pedagogical practices: it will be interesting to study the effect of these VR games on those students by considering differences in perception due to gender, autism or high potential. Other perspectives for extending our VR-related work include the combination of VR with Artificial Intelligence for personalized training. The VR concept introduced in this article also paves the way towards digital twins of batteries and processes (e.g. battery manufacturing plants), as well as towards a next generation of computational tools making computational modeling research accessible to a wide spectrum of researchers (including non-expert ones), beyond batteries as well. The idea of performing battery research by "playing" echoes the famous quote attributed to Albert Einstein: "play is the highest form of research". [78]
7,068.6
2020-05-28T00:00:00.000
[ "Computer Science" ]
A Computational Approach to Find Deceptive Opinions by Using Psycholinguistic Clues — Product reviews and blogs play a vital role in giving end users the insight needed to make a decision. The direct impact of reviews and ratings on product sales raises a strong possibility of fake reviews. E-commerce sites often indulge in writing fake reviews to promote or demote particular products and services. These fictitious opinions, written to sound authentic, are known as deceptive opinion/review spam. Review spam detection has received significant attention in both business and academia due to the potential impact fake reviews can have on consumer behaviour and purchasing decisions. To curb this issue, many e-commerce companies have even started to certify reviewers. But this covers only a small chunk of reviewers, so the technique is not enough to deal with the problem of deceptive opinion spamming. Manually, it is difficult to detect these deceptive opinions. This work primarily focuses on enhancing the accuracy of existing deceptive opinion spam classifiers using psycholinguistic/sociolinguistic deceptive clues. We have formulated this problem in different ways and solved it with many machine learning techniques. This work was carried out on the publicly available gold standard corpus of deceptive opinion spam and achieved up to 92 percent cross-validation accuracy in the restaurant domain and around 94 percent in the hotel domain with the final classifier. A detailed comparative analysis of the results has been carried out for all of the machine learning algorithms used. Word play is deceptive, and so is the human being. Review language plays a major role in identifying hidden intentions. Our main focus in this work is to explore the use of psycholinguistic/sociolinguistic features in order to study the deceptive behaviour of the reviewer. A lot of study has been done by linguists and psychologists to find verbal and non-verbal clues to deception [2] and to establish an association between psycholinguistic features and deception. However, in our opinion, many of these associations have not been utilized for opinion spam detection. On this basis, we propose an intermediate layer in which we identified various computational psycholinguistic measures/metrics and their association with the deceptive behaviour of a person, based on studies conducted earlier. To achieve our objective, we built different computational metrics, and on a benchmark dataset (opinions classified as spam and non-spam) we observed that these measures are significantly different in spam and non-spam reviews. These measures were used as features for training and testing various machine learning models. This work mainly focuses on: • Formulation of the opinion spam detection problem in different ways: genre identification (informative vs. imaginative writing), linguistic deception detection, and a traditional text classification problem. • Use of psycholinguistic/sociolinguistic features such as emotion, negativity, tension, anger, personal concern, tone, etc. to understand the intention of the reviewer. • Use of readability and lexical diversity as features in the context of opinion spamming. We have observed that these measures can contribute significantly towards detecting deceptive reviews; however, to our knowledge, no preliminary study has been reported on the application of these measures in the opinion spamming domain.
• Use of SVM (support vector machine), SLDA (stabilized linear discriminant analysis) and ensemble learning techniques to detect opinion spam. We have performed experiments on the restaurant and hotel domains of Myle Ott's gold standard dataset [3]. A comparative study and analysis of each approach and the corresponding results is given. The rest of the chapter is organized as follows. The second section describes various works related to opinion spamming, considering different approaches. Section 3 explains feature identification and construction and justifies their use both logically and statistically. Section 4 includes the problem formulation and the classification methodology that we have used in this work for deceptive spam detection. Section 5 contains experimental details along with a statistical analysis of the results. The last section comprises the conclusion as well as future work. II. RELATED WORK The basic definition of spamming refers to web spam, which includes email spam and search engine spam, that is, the act of misleading search engines into ranking some web pages higher than they deserve [4]. Going beyond this basic definition, spamming also includes opinion spamming, which is a comparatively new field of research. A lot of research is ongoing in the field of opinion mining and sentiment analysis, but only a few of these studies have focused on the opinion spam problem, and more specifically on deceptive opinion spam detection. Preliminary research was reported on Amazon reviews [5]; the authors re-framed the review spam identification problem as a duplicated review identification problem. Previous attempts at spam/spammer detection used reviewers' behaviours, text similarity, linguistic features, review helpfulness and rating patterns. One of the finest works in the field of deceptive opinion spam identification integrated psychology and computational linguistics [3]. The authors claimed that the best performance was achieved by using psychological features with a support vector machine (SVM), detecting deceptive spam with accuracy of up to 89 percent on the hotel domain. They also contributed a large-scale, publicly available gold standard dataset for deceptive opinion spam research. In another approach, the authors proposed a model complementary to existing approaches for finding subtle spamming activities [6]; it can thus be combined with other textual-feature-based models to improve their accuracy. In that work, the authors proposed the novel concept of a heterogeneous review graph and claimed to capture the interrelationships among reviewers, reviews and the stores that the reviewers have reviewed. This model tries to identify suspicious reviewers by exploring the nodes of the graph. It also tries to establish the relationship between the trustworthiness of reviewers, the honesty of reviews and the reliability of stores. This work achieved precision of up to 49 percent; however, the authors claimed to identify suspicious spammers that could not be detected by other existing techniques. As earlier studies suggest, ratings have a high influence on revenue: a higher rating results in higher revenue. Many companies are indulging in insidious practices to get undue benefits. Unfair and biased rating patterns have been studied in several previous works [7], [8]. In one approach, the authors identified several characteristic behaviours of review spammers and modelled these behaviours to detect spammers [9].
They derived aggregated behaviour scoring methods to rank reviewers according to the degree to which they demonstrate spamming behaviour. Their study shows that, by removing reviewers with very high spam scores, the highly spammed products and product groups experienced significant changes in aggregate rating compared with removing randomly selected or unrelated reviewers. Another approach involves capturing the general differences in language usage between deceptive and truthful reviews [10]. This model tried to include several domain-independent features that allow formulating general rules for recognizing deceptive opinion spam. The authors used part of speech (POS), psychological and some other general linguistic cues of deception with SAGE [11] and SVM models. The dataset used in that work includes the following domains: hotel, restaurant and doctor. SAGE achieved a much better result than SVM and was around 65 percent accurate in the cross-domain task. Another model, integrating some deep linguistic features derived from syntactic dependency parse trees, was proposed to discriminate deceptive opinions from normal ones [12]. The authors worked on Ott's dataset and a Chinese dataset and claim to produce state-of-the-art results on both. Opinion spamming can be done individually or may involve a group [13]. Group spamming can be even more damaging, as the group can take total control of the sentiment on the target product due to its size. That work was based on the assumption that a group of reviewers works together to demote or promote a product; the authors used frequent pattern mining to find candidate spammer groups, together with several behavioural models derived from the collusion phenomenon among fake reviews and relation models. III. THEORETICAL FRAMEWORK FOR FEATURE IDENTIFICATION AND CONSTRUCTION In our work, we have considered various well-defined readability, lexical diversity and psychological features along with n-gram measures. Each of these measures can be used to characterize a review, and these characteristic measures have been used as features of the review. This work is based on the observation that these features help us to distinguish between deceptive and truthful reviews. A. Readability The creator of the SMOG readability formula, G. Harry McLaughlin, defines readability as "the degree to which a given class of people find certain reading matter compelling and comprehensible" [14]. It was in 1937 that the US government for the first time decided to grade civilians rather than considering them as either literate or illiterate. According to the National Center for Educational Statistics (1993), the average US citizen reads at the 7th-grade level, and when it comes to writing it degrades even further. It has been observed that a review written by an average US citizen contains simple, familiar words and usually less jargon compared to one written by a professionally hired spammer. This simplicity and ease of words lead to better readability. In particular, we test the hypothesis that, all else being equal, higher readability is associated with a lower chance of spam. Various readability metrics have been suggested to quantify the readability of a text. Among them, we have considered only a few well-established readability metrics [14], [15]. To be specific, we computed the Automated Readability Index (ARI), Coleman-Liau Index (CLI), Chall Grade (CG), SMOG, Flesch-Kincaid Grade Level (FKGL) and Linsear (LIN). As a whole, the readability features are referred to as READ throughout this paper.
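For concreteness, the following minimal Python sketch computes two of the READ metrics listed above, ARI and the Coleman-Liau Index, from raw character, word and sentence counts (the syllable-based metrics such as SMOG and FKGL additionally require a syllable counter and are omitted here). This is an illustrative re-implementation, not the exact tooling used in our experiments.

```python
import re

def text_stats(text):
    """Return (letters, words, sentences), each floored at 1 to avoid division by zero."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    letters = sum(len(w) for w in words)
    return letters, max(1, len(words)), sentences

def ari(text):
    """Automated Readability Index: 4.71*(chars/words) + 0.5*(words/sentences) - 21.43."""
    letters, words, sentences = text_stats(text)
    return 4.71 * letters / words + 0.5 * words / sentences - 21.43

def coleman_liau(text):
    """CLI = 0.0588*L - 0.296*S - 15.8, with L = letters per 100 words,
    S = sentences per 100 words."""
    letters, words, sentences = text_stats(text)
    return 0.0588 * (100 * letters / words) - 0.296 * (100 * sentences / words) - 15.8

review = "The room was clean and the staff were friendly. We would stay again."
print(f"ARI = {ari(review):.1f}, CLI = {coleman_liau(review):.1f}")
```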
Table 1 below shows the statistical measures for the readability metrics of the restaurant domain with respect to truthful and deceptive opinions. The statistics in Table 1 show a significant difference in ARI (two-tailed t-test, p = 0.0045), CLI (p = 0.03), CG (p = 0.02), SMOG (p = 0.01), FKGL (p = 0.01) and LIN (p = 0.03) between truthful and deceptive reviews. B. Lexical Diversity Lexical diversity is another text characteristic that can be used to distinguish between deceptive and truthful opinions. The more varied the vocabulary a text possesses, the higher the lexical diversity of that text. For a text to be highly lexically diverse, the writer's word choice needs to be varied and diversified, with little repetition of vocabulary. Moreover, previous researchers have shown that lexical diversity is significantly higher in writing than in speaking [16], [17]. According to different studies, lexical diversity is genre-sensitive [17]. Various search engine optimization (SEO) companies are hired to influence product ratings to give undue benefits to the hiring companies. They write fake reviews to manipulate customers' opinions about particular products. Whether working individually or in a group, an employee writes more than one review to make a significant impact, so these reviews have higher similarity and lower lexical diversity. Moreover, when they have to write reviews of products or services they are not familiar with, they tend to borrow vocabulary from previously written reviews; this phenomenon also leads to low lexical diversity. Truth tellers, in contrast, come with fresh ideas, honest opinions and experience, which leads to higher lexical diversity in comparison to liars. Numerous metrics for measuring lexical diversity exist, and each of them has its pros and cons. For example, the traditional lexical diversity measure is the ratio of different words (types) to the total number of words (tokens), the so-called type-token ratio, or TTR [18]. Text samples containing a large number of tokens give a lower value of TTR, and vice versa, because of its sensitivity to sample size. The D measure, developed by Brian Richards and David Malvern [19], was designed to be independent of sample size, but it too has been criticized for remaining sensitive to sample size [20]. Other variants transform the ratio to reduce this sensitivity (e.g. Herdan's C, also called LogTTR; G. Herdan, 1960). Even as a traditional classifier feature, lexical diversity can play a significant role. Here we tried to find out how effective lexical diversity is for identifying deceptive opinion spam. The combination of all of the lexical diversity metrics is referred to as LEX in this paper. Further, we test the hypothesis that, all else being equal, higher lexical diversity is associated with a lower chance of spam. Table 4 below shows the various lexical diversity measures for the restaurant domain. The statistics show a significant difference in TTR (two-tailed t-test, p = 0.0272), CTTR (p = 0.0325), MA-TTR (p = 0.0288), MS-TTR (p = 0.0316), log-TTR (p = 0.0173), R (p = 0.0334), S (p = 0.0247) and U (p = 0.005) between truthful and deceptive reviews. C. Psychological and linguistic features It is a well-known fact that lying is undesirable; decent people rarely lie, and this lack of practice makes them poor liars. Falsehoods communicated by mistake, meanwhile, are not lies.
People lie less often about their actions, experience and plans, and when they do, they lie in pursuit of material gain or to escape punishment. Deception can be defined as the task of misleading others. People behave in quite different ways when they are lying compared to when they are telling the truth, and practitioners and laypersons have been interested in these differences for centuries [21]. In 1981, Zuckerman, DePaulo and Rosenthal published the first comprehensive meta-analysis of cues to deception [6]. They reported large differences in the verbal and nonverbal cues occurring in deceptive communications compared with truthful ones. This study shows that liars make a more negative impression and are more tense. Michal Woodworth revealed that liars produce more sense-based words [22]; in other words, deceptive reviewers are more subjective than truthful ones. Liars also use fewer self-oriented words (I, me, mine, we, etc.) but more other-oriented words (you, they, etc.). According to a study on deception, liars offer fewer details than truth tellers, not only because they have less familiarity with the domain but also to allow fewer opportunities to be disproved [23]. To extract psychological features from text reviews, we have used Linguistic Inquiry and Word Count (LIWC) [24]. It is a transparent text analysis program that counts words in psychologically meaningful categories. Empirical results using LIWC (version 2015) demonstrate its ability to detect meaning in a wide variety of experimental settings, including attentional focus, emotionality, social relationships, thinking styles and individual differences. It is among the most popular text analysis tools in the social sciences. Its output variables are categorized into linguistic processes, psychological processes, personal concerns and spoken categories. We have used its linguistic process (LIWC_ling) and psychological process (LIWC_psy) feature sets. The psychological and linguistic features of LIWC jointly are referred to as LIWC_all in this paper. Table 6 shows a list of a few LIWC features. D. N-Gram To capture the context of the review we have used unigrams (UG) and bigrams (BG). Some generic preprocessing, like removing stop words and extra white space, is done before generating the DTM (document-term matrix). The top UG and BG were filtered based on their term frequency and inverse document frequency scores. Jointly, we refer to UG and BG as n-grams (NG) in this paper (a minimal sketch of this construction is given below). IV. PROPOSED WORK As discussed earlier, this paper primarily focuses on improving the accuracy of opinion spam classifiers by identifying domain-independent linguistic and psycholinguistic features. This section is divided into three subsections. The first subsection focuses on feature identification and construction, with an explanation of their significance for opinion spam detection. The second subsection deals with possible ways of formulating the problem and explains the different strategies and their corresponding feature sets. The third subsection describes the various classification methods used in this work. A. Problem Formulation There are various ways to formulate the problem of detecting opinion spam. Opinion spam can be identified either by using duplicate detection or by using classification techniques. Much of the existing literature on opinion spamming has framed opinion spam identification as a duplicated opinion identification problem; however, this assumption is not appropriate [33].
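As a minimal sketch of the n-gram construction of Section III.D above (stop-word removal, unigrams plus bigrams, tf-idf filtering of the top terms), the following Python snippet uses scikit-learn; our experiments were run in R, so this is an illustrative equivalent on toy reviews.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

reviews = [
    "The pasta was fresh and the service was quick.",
    "Amazing experience, best restaurant in town, amazing food!",
]
vectorizer = TfidfVectorizer(ngram_range=(1, 2),   # unigrams (UG) and bigrams (BG)
                             stop_words="english", # generic stop-word removal
                             max_features=5000)    # keep only the top-scoring terms
dtm = vectorizer.fit_transform(reviews)            # sparse document-term matrix
print(dtm.shape, vectorizer.get_feature_names_out()[:5])
```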
Based on the type of spam, this paper reports a study on deceptive opinion spamming. We have tackled the problem of opinion spam detection in the following three ways. 1) Genre identification: informative vs. creative/imaginative writing The problem of finding deceptive opinion spam can be constituted as a genre identification task: deciding whether a text is imaginative or informative writing. Imaginative writing is quite different from informative writing. Imaginative writing relies heavily on imagination and the motive behind it; it includes the representation of ideas, feelings and mental images in words. People behave differently when they have to write about something that they have not experienced. For example, when you imagine something rather than experiencing it, you tend to be more negative and tense. Informative writing, however, comprises mostly truth, facts and experience. It primarily provides information through explanation, description, argument and analysis. Imaginative writing might use metaphor to translate ideas and feelings into a form that can be communicated effectively. We can easily relate the imaginative writer to the deceptive reviewer, who leaves clues such as more sense-based words, fewer facts, etc. Psychological features can play a vital part in distinguishing between deceptive and truthful reviews. People lie most frequently about their feelings and their preferences, but less often about their experience, actions and plans, and their lies become clearly visible in their writing when they write a false review about their experience of a product or service. On the other hand, studies suggest that lexical diversity is genre-sensitive [17]. As discussed earlier, vocabulary richness should be higher in informative reviews because of the originality of their content, whereas when someone tries to write about something she/he has not experienced, they might borrow words and use them repetitively, which leads to low lexical diversity. That is why we have used the LIWC_psy and LEX feature sets to train our classifiers for the genre identification problem. 2) Linguistic deception detection This whole problem can also be treated as linguistic deception detection, focusing on how effectively linguistic features alone can detect deception. Studies suggest that, to the extent that liars deliberately try to control their feelings, expressive behaviour and thoughts, the chances are higher that their performance will be compromised [2]: they will seem less forthcoming, less convincing, less pleasant and more tense. A deceptive spammer leaves various linguistic cues when lying about something. To obtain linguistic deceptive cues we have used the LIWC_ling feature set, which subsumes most of the linguistic features used in previous research works. Apart from these, we have also used the READ feature set. With both of these feature sets, we developed our linguistic classifiers for this approach. 3) Traditional text classification problem In the most traditional way, this problem can be cast as a text classification problem using various feature sets. We trained various classifiers with all possible combinations of our feature sets; rather than reporting all classifiers, we have listed only the top-performing ones. B. Classifiers This section describes the various machine learning approaches used in this work.
For the given set of features, we trained SVM, stabilized linear discriminant analysis (SLDA), random forest (RF), decision tree (DT), neural network (NN), maximum entropy (ME), bagging and boosting models for all three approaches mentioned earlier. Out of all these classifiers, SVM, SLDA, RF, bagging and boosting performed better than the rest. SVM [25] is one of the most powerful techniques for non-linear classification and has performed well in related work [5]. It tries to find the optimal separating hyperplane between the classes, using kernel methods to map the data into higher dimensions via some non-linear mapping. We have used the C++ implementation (LIBSVM) by Chih-Chung Chang and Chih-Jen Lin with C-classification and an RBF kernel. Data are scaled internally to zero mean and unit variance for better class prediction. SLDA is linear discriminant analysis based on left-spherically distributed linear scores; we have used the implementation of LDA for q-dimensional linear scores of the original p predictors derived from the PCq rule [18]. Apart from SVM and SLDA, we have also focused on the ensemble methods bagging, boosting and random forest. Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. Significant improvements in classification accuracy have resulted from growing an ensemble of trees and letting them vote for the most popular class; the generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them [26]. Bagging combines multiple classification models, or the same model trained on different learning sets; in bagging, the final classification is the class predicted most often, determined by the votes of these classifiers. Boosting also combines the results from multiple classifiers, but it derives weights to combine the predictions from those models into a single prediction or predicted classification. In both bagging and boosting, we have used a decision tree as the individual classifier (a minimal sketch of such a pipeline is given below). C. Dataset As mentioned earlier, we have used the publicly available gold standard deceptive opinion spam corpus for our experiments [3]. This dataset was generated through crowdsourcing and domain experts. To construct the dataset, the authors mined truthful reviews of 20 hotels near Chicago from TripAdvisor, following the work of Yoo and Gretzel [27]. To solicit deceptive reviews, they used anonymous online workers (known as turkers), who were told to assume they were employees in the marketing department of the company; these turkers were paid one dollar to write a fake review for the hotel/restaurant. The earlier version of the dataset has reviews for the hotel domain only (400 truthful, 400 deceptive). The current version of the dataset has reviews for the restaurant domain (200 truthful, 200 deceptive) and the hotel domain (800 truthful, 800 deceptive). We have performed our experiments on the current dataset for both domains; the baseline results are reported on the earlier version of the dataset for the hotel domain. V. EXPERIMENTS AND RESULTS This work is an extension of Myle Ott's work on finding deceptive opinion spam. In that work, NG and psycholinguistic features were used to achieve the best accuracy with SVM; we use this accuracy as the baseline result for our experiments, as shown in Table 9.
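A minimal sketch of the evaluation protocol, written with scikit-learn as an illustrative stand-in for the R/LIBSVM tooling used in this work: several of the classifiers above (RBF SVM on standardized features, random forest, bagging and boosting with tree base learners) are scored with 10-fold cross-validation on a toy feature matrix.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier, AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((400, 30))                  # 400 reviews x 30 features (toy data,
y = rng.integers(0, 2, 400)                #  e.g. READ + LEX + LIWC columns)

models = {
    "SVM (RBF, scaled)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "Random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "Bagging (trees)": BaggingClassifier(n_estimators=100, random_state=0),
    "Boosting (trees)": AdaBoostClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)   # 10-fold cross-validation
    print(f"{name:20s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```

On the random toy data above the accuracies hover around chance; on a real feature matrix the same loop reproduces the kind of comparison reported in Tables 9 and 10.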
They used psycholinguistic features extracted from an earlier version of LIWC (version 2007), which we refer to as LIWC_old in this paper. The authors performed their experiments on an earlier version of the dataset, on the hotel domain only. To build the classifiers, we extracted 92 text dimensions as text features from LIWC, twelve metrics of lexical diversity and eight metrics of readability, along with unigrams and bigrams, using R packages. We have used some standard feature selection techniques to avoid overfitting, improve accuracy and reduce training time; moreover, including redundant features can sometimes mislead the modeling algorithm. We have used Weka [] as a feature selection tool and tried every attribute evaluation method available in Weka to select the best features; chi-square, information gain and gain ratio outperformed the others. R software is used for the simulations. To check the effectiveness of the feature sets, we trained classification models for each of them individually. Tables 7 and 8 show how the different feature sets performed individually with different learning methodologies for the hotel and restaurant domains, respectively. In terms of feature sets, we find the psychological processes most effective at differentiating between truthful and deceptive opinions. The newer version of the LIWC feature set (LIWC_all) gives a better result than the older version (LIWC_old); the reason for the improved performance is the inclusion of new text dimensions such as tone, authenticity and informality. Apart from that, the LIWC_all feature set also performs better than LEX and READ. The difference between these psychological processes supports Zuckerman's claim that psychological processes are likely to occur more or less often when people are lying compared with when they are telling the truth [28]. To understand this result better, we have to look at what LIWC_all subsumes: it determines the degree to which any text uses positive or negative emotions, self-references, casual words, and 80 other language dimensions. We have also observed differences in word and sentence counts, which are included in this feature set; this supports Vrij's claim that liars offer fewer details to allow fewer opportunities to be disproved [29]. Even though LIWC_all shows good classification accuracy compared to LEX and READ, it has more features than both READ and LEX, and moreover all these feature sets can work as complements to each other. The statistics in Tables 1, 2, 4 and 5 show a clear difference in readability and lexical diversity between deceptive and truthful reviews; two-tailed t-tests easily rejected the null hypotheses, showing significant differences. The classification accuracies on these feature sets also lend strength to both of the hypotheses we stated earlier about readability and lexical diversity. Tables 9 and 10 show the results of all three strategies, along with their feature sets, for all learning models for the hotel and restaurant domains, respectively. All learning models trained only on n-grams performed comparatively better than those trained on the LEX, READ and LIWC_all feature sets. This shows that the context of the documents needs to be considered, with all other feature sets working as complements to further improve accuracy. Among all the classifiers, the ensemble methods (mainly bagging and boosting) outperformed the others on most occasions.
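The feature-selection step described above (performed here with Weka, where chi-square, information gain and gain ratio scored best) can be sketched with scikit-learn equivalents, chi2 and mutual information, as an illustrative stand-in on toy non-negative count features.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif

rng = np.random.default_rng(1)
X = rng.integers(0, 5, size=(400, 200)).astype(float)   # e.g. n-gram counts (toy)
y = rng.integers(0, 2, 400)                             # toy labels

# chi-square: keep the 50 features most associated with the class labels
selector = SelectKBest(score_func=chi2, k=50).fit(X, y)
X_reduced = selector.transform(X)
print(X_reduced.shape)                                  # (400, 50)

# information-gain analogue: mutual information between each feature and y
ig_scores = mutual_info_classif(X, y, random_state=1)
print("top feature index by mutual information:", int(np.argmax(ig_scores)))
```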
Figures 2 and 3 show the best performance for each classifier under 10-fold cross-validation using the NG, LIWC_all, READ and LEX feature sets for the hotel and restaurant domains, respectively. By treating deceptive spam detection as a genre identification task, using only the genre-sensitive feature sets LIWC_psy and LEX, we achieved accuracy of up to 83% for hotels and 81% for restaurants. In previous work [3], part of speech (POS) was used as the genre identification feature, achieving up to 73% accuracy for hotels. On the other hand, treating the problem as linguistic deception detection and using only linguistic features, we achieve up to 80% for the hotel domain and 79% for the restaurant domain. Tables 11 and 12 show the micro precision, recall and F-score of the best-performing method for all strategies on the respective feature sets. In our experiments, we noticed that in most cases there is no significant difference in accuracy between RF and SVM, and an advantage of using random forest over bagging and boosting is that it is faster and relatively robust to outliers and noise. Apart from that, it gives an internal estimate of the correlation and importance of the features, which is shown in Table 13. In this study, we also contrasted our results with some of the findings of previous research. For example, across studies, it has been found that deceptive statements are moderately descriptive and distanced from the self compared to truthful ones [30]. In the case of deceptive reviews, we found lower total word and sentence counts but more self-referencing. Deceptive reviews are less descriptive, and the reason for this might be the fear of being caught. It has also been observed that, to make an impact, spammers go either extremely positive or extremely negative: a clear difference is seen in the negative and positive feature values between the two types of reviews. Using different linguistic measures, researchers found that non-naive individuals assigned to be deceptive showed less diversity and complexity compared with naive individuals who were truthful [31]. Our study supports both claims, as we also found fewer exclusion words, which are a marker of complexity, in deceptive reviews, and less diversity, because in the absence of real experience the spammer borrows experience from other reviewers. VI. CONCLUDING REMARKS It is a widely accepted fact that deceptive spam is difficult to detect manually. In this work, we trained an automated classifier with high accuracy using domain-independent features. We have uncovered the relationship between deceptive opinions and linguistic features like readability and lexical diversity. This work has shown different ways of framing the problem of deceptive spam detection and effective strategies to solve them. Detailed experiments and analyses have been presented for the various machine learning algorithms. This paper makes several theoretical contributions, contrasting some assumptions about deception and strengthening many others. Spammers are getting smarter every day; that is why, in the future, both domain-specific and domain-independent deceptive clues need to be discovered. One possible future direction is to evaluate these deception clues in other domains.
6,629
2017-06-30T00:00:00.000
[ "Computer Science" ]
Risk assessment at Puerto Vallarta due to a local tsunami

The Jalisco region in western Mexico is one of the most seismically active in the country. The city of Puerto Vallarta is located at Bahía de Banderas on the northern coast of Jalisco, where a seismic gap (the Vallarta Gap) currently exists. Historically, seismogenic tsunamis have affected the coast of Jalisco. In this work, we assess the risk due to a local tsunami in the city of Puerto Vallarta as a function of the interaction between hazard and vulnerability. We model the tsunami hazard, generation and propagation, using the initial conditions of a great earthquake (Mw ≥ 8.0) similar to those that occurred in 1787 at Oaxaca and in 1995 at Tenacatita Bay, Jalisco. Vulnerability is estimated with data available for the years 2010-2015, using sociodemographic variables and the locations of government, commercial and cultural facilities. The area with the highest vulnerability and risk lies between the valleys of the Ameca and Pitillal Rivers, extending to a distance greater than 5.1 km from the coastline and affecting an area of 30.55 km². This study does not consider the direct damage caused by the tsunamigenic earthquake and its aftershocks; it assumes that critical buildings in the region, mostly hotels, would not collapse after the earthquake and could serve as a refuge for their users. The first tsunami wave arrives at Puerto Vallarta (Cuale) 19 min after the earthquake (I_t) with a height (H_i) of 3.7 m; the run-up arrives 74 min after the earthquake (A_t) with a height (H_r) of 5.6 m.

Introduction

The Jalisco region, in western Mexico (Fig. 1), is one of the most seismogenic regions in Mexico, with many past destructive earthquakes of great magnitude, some of which generated important tsunamis. The largest instrumentally recorded earthquake of the twentieth century in Mexico was an M = 8.2 event on June 3, 1932, located off the coast of Jalisco. A few days later, on June 18, 1932, a magnitude M = 7.8 earthquake struck the region again. Sánchez and Farreras (1993) proposed that both earthquakes were tsunamigenic. Nevertheless, the most destructive tsunamigenic event in the region, on June 22, 1932, was probably initiated by a submarine slump landslide involving sediments supplied over time by the meandering Armería river system and accumulated on the continental shelf (Pacheco et al. 1997; Corona and Ramírez-Herrera 2015). It destroyed a resort at Cuyutlán (Colima state), causing a maximum water layer height of 15 m and an estimated flooding extent of 1 km inland along 20 km of coast. In 1995, an M_W = 8.0 earthquake off the coast of Jalisco caused a tsunami that affected a 200-km-long stretch of coastline, with damage limited to low-lying coasts (Ortiz et al. 2000; Trejo-Gómez et al. 2015). The 1995 earthquake ruptured only the southern half of the area proposed for the 1932 events (Singh et al. 1985), suggesting that the northern coast of Jalisco, including Bahía de Banderas (BdB) (Fig. 1), presents a seismic gap (the Vallarta Gap) that might rupture and generate a local tsunami. The BdB region (Fig. 1) could be affected by tele-, regional and local tsunamis. To date, there is no historical report of significant damage caused by tele-tsunamis or regional tsunamis at BdB.

Table 1. Modified from Núñez-Cornú et al. (2018).
In the case of tele-tsunamis, the waves reported historically are less than one meter; however, the hazard exists because bay resonance could amplify tsunami waves and generate a seiche capable of causing much damage. Dressler and Núñez-Cornú (2007) calculated the bay's eigenperiod (resonance period) as T = 2,726 s (about 45 min 26 s) using a hydropneumatic method and a preliminary bathymetric model of the BdB. Specific sites such as Boca de Tomatlán, Marina Vallarta, Plaza Genovesa and the Playa de Los Muertos area of Puerto Vallarta were evaluated for a wave of 10 cm amplitude entering BdB at different arrival periods; the results vary between sites, with amplitudes exceeding 100 cm and oscillations of different periods. Based on the hypothesis of an earthquake that fills the Vallarta Gap (M_W = 8.0), Núñez-Cornú et al. (2006) estimated the tsunami run-up height (H_r) and analyzed the effects at four places in Puerto Vallarta; they concluded that a tsunami would enter the Pitillal river valley with 2 m ≤ H_r ≤ 4 m and the Ameca river valley with 5 m ≤ H_r ≤ 7 m. The estimated run-up was imprecise in shallow waters (depth < 50 m), due to the resolution of the information used, because it did not discriminate between the different beaches of Puerto Vallarta. For this scenario, a specialized study is required to generate datasets that better characterize the shallow-water marine relief at Puerto Vallarta, because in these areas the tsunami wave is refracted and the hazard level could increase. An example of flood damage at Puerto Vallarta was the storm surge generated by Hurricane Kenna in October 2002. The waves produced by this hurricane penetrated approximately 1 km inland. Three hotels suffered intense non-structural damage, and the tourist strip was affected with different levels of damage depending on proximity to the beach, flooding and sand deposition. No direct victims were reported for this event. The coastal community of Puerto Vallarta faces the challenge of mitigating the local tsunami hazard from major earthquakes, which requires developing accurate risk reduction measures based on thorough tsunami hazard, vulnerability and risk assessments. Núñez-Cornú and Carrero-Roa (2012) suggest that land managers with an adequate perception of risk could make decisions to prevent a disaster by reducing vulnerability through design actions based on scientific data and risk assessment theory. Risk assessment consists of three phases: evaluation, management and perception. Reducing vulnerability requires mitigation actions such as sustainable land use, possible civil works and population preparation (knowing how to take shelter to stay safe, etc.). Civil Defense authorities need to generate contingency response protocols, determine the recovery and reconstruction actions for the affected area, and evaluate the economic losses to the community caused by a false alert. Risk perception is a critical phase of risk assessment: many factors intervene, starting with education level, personal experience and perception, along with others such as collective memory and intuition. This occurs at all levels, both among those potentially affected and among the authorities. A wrong perception in decision-making can cause a disaster worse than the highest of the proposed scenarios. One such example is the catastrophe caused by Hurricane Katrina in 2005 in New Orleans, USA (Dixon 2015), with more than 1,800 fatalities.
Katrina remains the costliest disaster in US history. Another example occurred as a result of the 1985 eruption of Nevado del Ruiz Volcano, Colombia, where a lahar destroyed the city of Armero, causing more than 24,000 fatalities, despite the scientific efforts of INGEOMINAS to produce an accurate volcanic risk map of the Nevado del Ruiz Volcano and the efforts of the Colombian Civil Protection to disseminate it and implement an alert system: "It was difficult for nongeologists (Congress and local officials) [...]" (Mileti et al. 1991). Such a disaster occurs after a catastrophic chain of hazard and social events (Smith 2013). For Núñez-Cornú and Carrero-Roa (2012), it is not the lack of information or the misperception of hazard that causes the disaster; rather, a disaster results from the inadequate management of socially acceptable risk, based on three factors: (a) denying the hazard, (b) maintaining the inertia of the territory without planning, and (c) transferring the costs of risk to others. An assessment of the tsunami risk of a coastal community is the primary initial information required for the design of civil defense education programs for society and for the implementation of protocols by local emergency institutions and civil defense, in both public and private buildings (Hebenstreit et al. 2003). Currently, Puerto Vallarta has the highest population density on the Jalisco coast and the second largest urban area in the state, and it plays an essential role in regional economic development, mainly through tourism. The 2020 population census counted 291,839 inhabitants (Instituto de Información Estadística y Geográfica de Jalisco; INEGI 2020). More than 90% of Puerto Vallarta's inhabitants live in our study area (Fig. 2), and there is an average floating population of approximately 50,000 people during the high tourist season, from October to April. The objective of this work is to carry out a first evaluation of the risk that a local seismogenic tsunami represents for the city of Puerto Vallarta. The hazard is modeled from potential tsunami inundation zones based on numerical simulations of a major thrust earthquake occurring in the Vallarta Gap, and the vulnerability is obtained from public databases. This study does not consider the direct damage caused by that earthquake; it assumes that critical buildings in the region, mostly hotels, would not collapse after the earthquake and could serve as a refuge for their users.

Tectonic setting and local tsunamis

In western Mexico, three tectonic plates interact: the Rivera Plate (RP) and the Cocos Plate, which are subducted along the Mesoamerican Trench (MAT) beneath the North American Plate (NOAM). This interaction has generated an active fragmentation process of the NOAM in this region (Bourgois and Michaud 1991), giving rise to a tectonic unit known as the Jalisco Block (JB), proposed by Luhr et al. (1985), which is drifting away from the Mexican mainland (Fig. 1). The JB is bounded to the north by the extensional structure known as the Tepic-Zacoalco Rift Zone (TZR), which continues east as the Chapala Rift Zone (ChR); the eastern boundary continues south to the Pacific coast along the Colima Rift Zone (CRZ). The CRZ is similar in structure and age to the TZR and is defined on land and offshore by recent seismic activity (Pacheco et al. 1997). The TZR consists of several tectonic depressions with extensional and right-lateral movements, which also indicate deep crustal faults between the JB and the NOAM. The western border of the JB is defined by the MAT.
Recent geophysical studies (Núñez-Cornú et al. 2016; Carrillo de la Núñez et al. 2019; Madrigal et al. 2021) found that to the north of the Marías Islands there is no clear evidence of an active subduction zone. Instead, faulting is observed to the west of the Marías Islands, while to the south, between the María Magdalena and María Cleofas islands, the subducted slab of the Rivera Plate is delineated by regional seismicity. They also report the existence of a 100-km-long tectonic structure south of María Cleofas Island, the Sierra de Cleofas (SC). The SC is oriented N-S and marks the boundary between the RP and the JB, possibly as a result of compression of the RP against the JB; it establishes the beginning of the current subduction and the associated seismic activity. Urías et al. (2016) propose that the existence of the Ipala Canyon (IC) is related to extension produced by the abrupt change in RP convergence and that the IC may be the southeastern limit of a major forearc block (Fig. 1), called the Banderas Forearc Block. The Jalisco region has experienced numerous destructive earthquakes of great magnitude with epicenters along the coast and inland. The historical macroseismic data for the region date back to 1544 (Núñez-Cornú 2011). Núñez-Cornú et al. (2018) reported that at least 22 major earthquakes with M ≥ 7.0 took place in the past 474 years. Suter (2019) concluded that the May 27, 1563, M_I = 8 earthquake took place offshore Puerto de Navidad (now named Barra de Navidad), with an estimated rupture area similar to those of the 1932 and 1995 earthquakes. Suter (2018) analyzed the macroseismic data of the October 2, 1847, Jalisco earthquake and concluded that there were two earthquakes on the same day: the first, a subduction-type earthquake, took place at 07:30 am offshore Tecomán, Colima, with an estimated magnitude of M_W = 7.4 (Fig. 1); the second, a shallow intraplate earthquake with an estimated magnitude of M_I = 5.7, took place at 09:30 am and affected the western part of the ChR, destroying the city of Ocotlán and other nearby towns.

Fig. 2 Study area with population density distribution using AGEBs.

Although no strong earthquake was reported in the region on March 12, 1883, Orozco y Berra (1888) reports the occurrence of a tsunami at Las Peñas (currently Puerto Vallarta): "It was observed that the sea withdrew from its ordinary beaches to a considerable extent and at a considerable distance from the coast, revealing some mountains and valleys in the background... It is not known with certainty what the ocean brought about in its withdrawal, but after some time, it reoccupied its basin with enough noise and impulse." To date, thirteen large destructive earthquakes (M > 7.0) (Table 1, Fig. 1) associated with the subduction of the RP beneath the NOAM along the Jalisco coast have been identified. Only for seven of these are there local data on relevant damage caused by the tsunamis generated. To these seven tsunamigenic earthquakes we must add two tsunamis (1883 and June 22, 1932) probably generated by submarine landslides. However, no geological studies have been conducted at Puerto Vallarta to identify damage and/or effects of historical tsunamis. Aida (1978) describes a numerical experiment for tsunami generation based on a seismic fault model, using seismic parameters, and shows that the calculated tsunamis agreed reasonably well with the tsunami records observed at several stations along the coast; the author proposes a correction factor K to adjust the results.
Hazard

Since the time of that publication, several methods have been proposed to model the seismic source of an earthquake using seismic and geodetic observations (Johnson 1999; Ratnasari et al. 2020; Gusman et al. 2014), as well as different methods to model the displacement of water generated by the seismic source (Geist 1999; Bryant 2001). In this study, we applied the methodology used in a previous study in Oaxaca, Mexico, as described below, to model the tsunami effects of the 1787 earthquake; in that case there was no seismic model of the source, so a seismic source based on the local tectonics was proposed, and the modeled theoretical tsunami waves fitted fairly well with the descriptions of historical damage. Here we assume that the rupture occurs on the plate interface, with dimensions consistent with an M_w ~ 8.0, which for the Vallarta Gap is an inverse (reverse) fault plane of L = 150 ± 30 km and W = 60 km, dipping 11° toward the coast at a depth of 10 km in the interplate region, according to the standard magnitude-area relation, where A is the area in km² (Utsu and Seki 1954; Wyss 1979; Singh et al. 1980). The total area was integrated from individual subareas of A_i = 30 × 30 km²; twelve segments were used (Fig. 3). The seismic moment Mo_i of each segment was adjusted individually by varying the coseismic dislocation d_i in the relationship Mo_i = μ A_i d_i so as to fit the moment magnitude of the earthquake (M_W) (Hanks and Kanamori 1979). The moment estimates assume a rigidity modulus that has been used previously for this region by various authors. The coseismic vertical deformation of the seafloor produced by the buried fault plane is computed using the dislocation model of Mansinha and Smylie (1971), prescribing a reverse fault mechanism on each of the segments; increasing d_i increases Mo_i and hence M_W. For the initial tsunami condition, the sea-level change is taken to be the same as the seafloor uplift calculated from the dislocation model. The propagation of the tsunami is simulated with the vertically integrated (linear) long-wave equations (Pedlosky 1982),

∂η/∂t + ∇·M = 0,
∂M/∂t + g h ∇η = 0,

where t is time, η is the vertical displacement of the water surface above the equipotential level, h is the depth of the water column, g is the gravitational acceleration, and M is the vector of discharge fluxes in the longitudinal and latitudinal directions. These equations are solved in a spherical coordinate system by finite differences with the leap-frog scheme (Goto et al. 1997). For the computation, the time step was set to 1 s; a grid spacing of 27 arcsec was used for the whole region, whereas a grid spacing of 3 arcsec was used to resolve the shallow areas. For nearshore bathymetry in the study region, from 1,000 m depth to the coast, we used data from local navigational charts (SEMAR 2011); no detailed bathymetry of BdB was available (scale 1:4,000 or higher). For depths greater than 1,000 m, we used the ETOPO-2 data set (Smith and Sandwell 1997). The model generated theoretical tsunami waveforms and arrival times, computed along the coast at 24 virtual gauge sensors (theoretical pressure sensors, or VTG) off the coasts of the Nayarit, Jalisco and Colima states, at depths of 10 m (Fig. 3). The tsunami amplification factor due to shoaling from 10 m depth up to the coast is practically negligible, ranging from 1 to 2% (Table 4).
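The segment-moment bookkeeping just described is straightforward to make concrete. The sketch below assumes a rigidity of μ = 5 × 10^10 Pa (the article refers to a regional modulus without stating the value in this excerpt, so this number is an assumption); with it, the slip scenarios land near the 8.0-8.2 magnitude range quoted later in the paper.

import numpy as np

MU = 5.0e10  # assumed rigidity in Pa; the article cites a regional modulus without stating it here

def moment_magnitude(slips_m, subarea_km2=30 * 30):
    # Mo_i = mu * A_i * d_i summed over segments, then Hanks & Kanamori (1979):
    # Mw = (2/3) * (log10(Mo) - 9.1) for Mo in N*m.
    area_m2 = subarea_km2 * 1.0e6
    mo_total = float(np.sum(MU * area_m2 * np.asarray(slips_m, dtype=float)))
    return (2.0 / 3.0) * (np.log10(mo_total) - 9.1)

# Twelve 30 km x 30 km segments with a uniform coseismic dislocation d_i,
# evaluated for the five hazard scenarios of Table 3:
for d in (2, 3, 4, 5, 6):
    print(f"d_i = {d} m  ->  Mw = {moment_magnitude([d] * 12):.2f}")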
To measure the maximum flood area due to run-up, a digital terrain elevation model (DTM) was generated on land, with 4 m² cells, interpolated from the DTM obtained by photogrammetry in the year 2000 (Núñez-Cornú et al. 2006). This DTM allows us to map elevation values equivalent to the run-up model. The tsunami hazard was calculated for a scenario of maximum flooding at high tide with the theoretical tsunami obtained. Different zones were delimited to match the altitude values on the coast of Puerto Vallarta with the synthetic tsunami waveforms and a local tide variation of about +1 m, based on tidal forecasts for Puerto Vallarta by the Centro de Investigación Científica y de Educación Superior de Ensenada, Baja California (CICESE 2016) for the years 2012, 2013 and 2016.

Vulnerability

We follow the methodologies used by Núñez-Cornú et al. (2006) and Suárez-Plascencia et al. (2008) in previous vulnerability studies of the city, natural hazard atlases and disaster reports in Mexico (Guzman et al. 2003; Simioni 2003; Rosales Gómez et al. 2004; García Arróliga et al. 2014). Because of the importance of Puerto Vallarta, there are sufficient databases on people, homes and facilities in our study area on different online platforms, both governmental and private. Data on people and housing come from platforms maintained by the Consejo Nacional de Población (CONAPO 2012) and the Instituto Nacional de Estadística y Geografía (INEGI 2010, 2013). Some facilities data in the study area were updated using the Google (2015) application. The analysis in this study consisted of determining the population attributes and the locations of government offices or facilities that characterize Puerto Vallarta's vulnerability in the area affected by the local tsunami hazard. We included census information for 214 Basic Geostatistical Areas (Área Geoestadística Básica [AGEB], as defined by INEGI) and for five rural localities (fewer than 2,500 inhabitants, Fig. 2) for the whole of Puerto Vallarta Municipality. According to Guzman et al. (2003), it is necessary to estimate the population affected if the hazard occurs; for that reason, the population in the affected area was projected to the year 2015. Assuming natural growth, the following equation was used:

C = P e^(r·t)    (7)

where C is the projected population susceptible to the hazard, P is the number of inhabitants registered in the last census, e is Euler's number, r is the population growth rate, and t is the time in years since the last available census (a short numerical illustration follows at the end of this passage). Initially, a projection of the affected population was calculated using Eq. 7 for the hazard area from the 2010 census data, assuming that the population grows continuously and slowly, the latter assuming no massive migratory movements after the available census records. This is a frequent calculation in risk management studies using data from the last census. We used a vector geographical information system (GIS) to calculate the z statistic for the population data (Wheater and Cook 2000; Mendenhall et al. 2013); with these values, it was possible to compare different AGEBs by vulnerability attributes in the affected area. Furthermore, the vulnerability of the population was analyzed according to age range (Table 2), with different factors or criterion scores (Gómez Delgado and Barredo Cano 2006) used to prioritize the vulnerable age groups.
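A one-line numerical illustration of Eq. (7), with assumed inputs: the base population and growth rate below are hypothetical, chosen only so that a five-year projection lands near the 88,316 inhabitants reported later in the results.

import math

def project_population(p_census, r, t_years):
    # Eq. (7): C = P * e^(r * t), continuous ("natural") growth
    return p_census * math.exp(r * t_years)

p_2010 = 80_000   # hypothetical 2010 census count in the flood area
r = 0.0198        # hypothetical annual growth rate
print(round(project_population(p_2010, r, t_years=5)))  # ~88,325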
For the age-range layer only, and before the z statistic was calculated, different experiments were carried out to observe which values highlighted the most vulnerable age groups. The highest value was assigned to the population under six years old, followed by older adults and people with special needs (limited mobility or cognitive impairment), on the assumption that these population categories would need support, either transport or precise instructions, to facilitate their movement to a shelter. Other population attributes were also examined, such as the type of housing and the availability of services (electricity, potable water, Internet and computer availability); these did not show significant differences between the affected AGEBs and were therefore not used in the final maps. We also located within the municipality the vulnerable facilities that must be kept in operation in case of a contingency, such as bridges, shopping centers (food supply), schools, communications and transportation; these facilities were located in the GIS and then reclassified as high vulnerability. In the same way, we added a layer for the overall average damage to household goods in the affected tsunami areas. The total vulnerability in each AGEB is evaluated over eight information layers, or vulnerability criteria, which are reclassified as very high, high, medium and low; in this way, a vulnerability value is obtained as the basis for calculating risk (Fig. 4). In the vulnerability map, the study area is divided into eight micro-basins by natural boundaries. We then examined the results in each AGEB and reclassified the vulnerability ratings as low, medium or high for a local tsunami at Puerto Vallarta (Fig. 4). The layers with z statistics can be viewed as density distributions: age ranges, education level, range of motion or learning, occupied population and housing.

Risk

According to Smith (2013), risk is based on probability and can be stated simply as probability times loss; when the analysis is undertaken, risk (R) is taken as some product of probability and loss. Tobin and Montz (1997) define a hazard as a potential threat to humans and their welfare and suggest that risk is expressed as the product of the probability of occurrence (hazard) and vulnerability. To evaluate the risk at Puerto Vallarta, we start from the modeling of one earthquake with the initial condition for a tsunami and the vulnerability information previously described, according to the relation

R = H × V,

where R is the risk, H the hazard and V the vulnerability; the functions shown in Fig. 4 are also used. The probability factor, or dislocation factor, is related to the seismic moment Mo, or magnitude, following the logarithmic Gutenberg-Richter relation, log N = a − b M (Gutenberg and Richter 1944). The vulnerability information layers are reclassified in each AGEB (Fig. 4), where v1 is education, v2 housing, v3 occupied population, v4 age, v5 population density, v6 range of motion or learning, v7 facilities, and v8 debris cleaning. For the inhabited homes in the tsunami flood area, the costs of household losses, cleaning and debris transportation were calculated for each micro-basin using a program for budgeting costs and controlling civil engineering works. The damage costs do not include the structural damage to buildings or bridges caused by a great earthquake.
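As a concrete reading of the relation R = H × V, here is a minimal sketch over hypothetical raster layers; the grid, the values, and the 0.4 weight are placeholders, not the study's data.

import numpy as np

# Hypothetical rasters on a common grid: H is the hazard layer (e.g., a normalized
# flood depth for the d_i = 5 m scenario) and V the combined vulnerability layer
# (v1..v8 reclassified), both scaled to [0, 1].
rng = np.random.default_rng(0)
H = rng.random((100, 100))
V = rng.random((100, 100))

def risk_map(hazard, vulnerability, probability_factor=1.0):
    # R = H x V; the slip-dependent probability factor (1.0 down to 0.1)
    # down-weights the less likely large-slip scenarios (cf. Fig. 11).
    return hazard * probability_factor * vulnerability

R_full = risk_map(H, V)                              # Fig. 10-style map: full scenario
R_weighted = risk_map(H, V, probability_factor=0.4)  # illustrative weight for a Fig. 11-style map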
We also do not include the costs associated with the interruption of day-to-day operations of the international airport, the international maritime terminal and the regional bus station, which were excluded from this study, nor the costs of the loss of cultural heritage, such as the Museo del Cuale (sometimes unrecoverable as an archaeological site), or losses such as cadastral documents, special equipment or computers in hospitals, government offices and schools. The risk and vulnerability maps were generated in a raster image manager; values for Puerto Vallarta were entered at different points in each AGEB and interpolated with a spline method.

Earthquake, fracture area, magnitude and dislocation

In this study, different tests were performed for the proposed tsunami by changing the initial earthquake conditions, varying the dislocation while keeping the fracture plane constant. Five hazard scenarios were evaluated with dislocation values d_i = 2, 3, 4, 5 and 6 m (Table 3) for Puerto Vallarta. The Pitillal riverside (VTG 5, Figs. 3 and 5) was the site of the highest run-up calculated along the Puerto Vallarta coast, 2.7 m ≤ H_r ≤ 8 m, with I_t = 20 min (arrival time of the first tsunami wave after the earthquake) and A_t = 72 min (arrival time of H_r). We analyze, in particular, the case d_i = 5 m.

Sea level and arrival times

The tsunami hazard was obtained from the run-up values affecting Puerto Vallarta. We calculated a scenario of maximum flooding, assuming that the Mw 8.0 earthquake occurred at high tide and that the run-up would affect the coastal communities in our study region. Synthetic tsunami waveforms were generated for the first 10 h after the earthquake for the 24 VTG distributed along the southern coast of Nayarit, Jalisco and northern Colima; the tsunami heights and arrival times for the coastal communities of the studied region are plotted in Fig. 5 and listed in Table 4. On the Jalisco coast, the first calculated tsunami wave (H_i) had an arrival time (I_t) of 11 min after the earthquake and corresponded to the municipality of Cabo Corrientes, at the VTG near the communities of Aquiles Serdán (H_i = 5.8 m, I_t = 11 min) and Ipala (H_i = 9.0 m, I_t = 13 min). At Ipala, the run-up wave (H_r) is 10.9 m, with arrival times (A_t) of 37, 60, 96 and 120 min (in this case, four "big" waves were generated); for other localities, see Table 4.

Flood by a local tsunami

The run-up H_r and the greatest inland penetration of ocean water (x) were calculated for our region. In five of the eight micro-basins, the tsunami hazard scenarios reached their maximum, with a run-up of 5 m ≤ H_r ≤ 9 m and an inland penetration of 0.6 km (Cuale) ≤ x ≤ 5.1 km (Ameca), resulting in a total flooded area of 30.55 km². A comparison of the tsunami flood, H_r and inland penetration between the micro-basins Ameca, Salado, Pitillal, Camarones and Cuale is also shown.

Housing and public facilities vulnerability

In 2020, 96% of the Puerto Vallarta population was concentrated in the study area; the remaining inhabitants of the municipality live in more than 100 rural towns at elevations of 20 m or more. For that reason, the rural towns were not considered in the risk assessment, as they lie outside the areas directly affected by the local tsunami hazard. For the year 2015, this study calculated that the population in the flood area increases to include 47 AGEBs, as shown in Fig. 8. Equation (7), C = P e^(r·t), projected 88,316 inhabitants within the area adversely affected by the tsunami hazard, whereas data from the 2020 census indicate 83,400 inhabitants.
The relative error corresponds to roughly 10% below the projected values. Land use in Puerto Vallarta has favored the establishment of tourist services, commerce and high-density housing close to the beach. For the year 2010, there were 33,942 dwellings in the affected area. The services available, such as potable water, electricity and drainage, were compared between the tourist strip and the rest of the community; the indexes are similar across the municipality, even in zones above 20 m elevation, because there is comparable, acceptable coverage of these services. In the case of a disaster, it is assumed that first attention would be given to re-establishing these services. At the time of this study, there were 320 vulnerable public facilities in the municipality, of which 33% are vulnerable to flooding if a local tsunami occurs. Table 5 shows the number of public facilities and the volume of debris per micro-basin used to estimate cleaning costs. The numbers of vulnerable facilities by micro-basin are as follows: 45 in Salado, 27 in Pitillal, 14 in Cuale, 12 in Ameca-Mascota and 6 in Camarones. The percentages of vulnerable facilities for the whole municipality, by primary use, are the following: schools 15%, government and emergency care 5%, health units 2%, and various uses 11% (such as electric power distribution, fuel storage, shopping malls, entertainment, museums, roads, transportation and bridges). For the vulnerability categories, the z statistic was used; an AGEB was classified as NULL where its data were zero or unavailable. The low vulnerability group corresponds to values of z ≤ −0.5, medium to −0.5 < z < 1, and high to z > 1. Different ranges of z-score values were established for the layers of education level, occupied population and housing by observing, in the study area, the values that reflect the hypothesized conditions. High vulnerability values due to population variables such as age or special needs (learning disabilities or cognitive impairment), educational condition and population density were observed in the El Salado, Pitillal and Ameca (Las Juntas) micro-basins. The total occupied population of the municipality of Puerto Vallarta in 2010 was 47,676, of which 18,592 are in the tsunami impact zone, concentrated in the El Salado basin. We used a raster GIS to produce the vulnerability map of Puerto Vallarta for the local tsunami hazard up to an elevation of 30 m. The map was obtained from 539 points carrying all the municipality's information, of which 219 correspond to AGEBs with population and housing data and 323 to the vulnerable facilities (Fig. 9).

Risk

We estimated the cost of property damage for each inhabited dwelling in the affected area under the assumption of a minimum damage of USD 1,200 (covering some household goods, such as living room, breakfast room, bed and kitchen furnishings); the total cost calculated for this item was USD 27,925,200. The costs of cleaning and transporting debris depend on the volume calculation. The affected areas were measured in each micro-basin, assuming a debris height of 0.1 m. The conditions established were that the debris consists of vegetation, sand and mud and that people would do the cleaning without special equipment. For debris removal and unloading costs, the conditions were the use of a dump truck travelling by road up to a distance of 1 km.
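As an aside, the z-score cut-offs described above reduce to a few lines of code. This is only a sketch; note that the boundary case z = 1 is left unassigned in the text, so its treatment here is a choice.

def classify_z(z):
    # Cut-offs from the text: NULL if no data, low if z <= -0.5,
    # medium if -0.5 < z < 1, high if z > 1. The boundary z == 1 is not
    # assigned in the text; it is grouped with "high" here.
    if z is None:
        return "NULL"
    if z <= -0.5:
        return "low"
    if z < 1:
        return "medium"
    return "high"

for z in (None, -1.2, 0.3, 1.8):
    print(z, "->", classify_z(z))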
The cost calculated for debris cleaning and transport was USD 6,457,899 (Table 5). We used a raster GIS to map the risk at Puerto Vallarta for a local tsunami, focusing on five of the eight micro-basins: Ameca, Salado, Pitillal, Camarones and Cuale. The observed land uses were the hotel zone, with medium risk, and the housing areas and vital facilities, with high and very high risk. We present two risk maps: in the first (Fig. 10), we consider the total flood area due to a 5 m coseismic slip; in the second (Fig. 11), an empirical probability factor based on the magnitude (coseismic slip) was considered. Since a smaller slip is more likely than a larger slip, in this case a probability factor between 1.0 and 0.1 (Fig. 4) was applied to the vulnerability.

Discussion

Puerto Vallarta is located in a seismic region in which earthquakes of great magnitude and local tsunamis have occurred. In this city, the impact of historical tsunamis is unknown for lack of evidence, because no specific studies have been carried out to date. We calculated H_r and A_t for a local tsunami to assess the hazard on the Jalisco and southern Nayarit coasts, using a theoretical seismic source based on the local tectonics. The values used for the coseismic slip, or dislocation, range from two to six meters (8.0 < Mw < 8.2). Pacheco et al. (1997) propose a maximum slip of 4 m for the 1995 earthquake. Quintanar et al. (2011) propose a maximum slip of 3.2 m for the 2003 earthquake; however, no tsunami was reported for that earthquake. Trejo-Gómez et al. (2015) used the same method to model the effects of the 1995 earthquake tsunami, but with different slips in some segments of the rupture area in order to fit the reported data. The H_r estimated at five places in Puerto Vallarta agrees with the studies of Núñez-Cornú et al. (2006). The scales of the data do not allow the estimation of a tsunami caused by the slumping of sediments in the deltas of the Puerto Vallarta rivers, as happened at Cuyutlán in 1932. For that type of tsunami we would expect more significant destruction, with H_r ≥ 10 m, because of the impulse added by the collapse of the sediment fringe of the Armería River. The vulnerability assessment for tsunamis, and the methodology used in this study, are similar to those used for floods and earthquakes, considering the population and facilities affected at the moment the hazard occurs (Guzman et al. 2003). These analyses are based on the information available for the city of Puerto Vallarta as published by government agencies. The estimated vulnerability of the municipality of Puerto Vallarta to tsunami flooding would be useful for both local and regional tsunamis following a major local or distant earthquake, in the event that the incoming tsunami had a period close to the natural (resonance) period of BdB, which could then generate seiches. The results provide some preliminary answers for this particular city; we did not consider the floating population due to tourism, because it varies with the time of year and with the characteristics, quality and service costs of the lodgings. Puerto Vallarta is a medium-sized city, and we did not assess the vulnerability of buildings (Simioni 2003; Santos et al. 2014; Voulgaris and Murayama 2014), only the surface area affected by tsunami flooding. In our study area, land use is mostly designated for buildings, hotels, tourist services and commerce, and these are closest to the coast.
Considering that most hotels have 4 or more levels (and are thus susceptible to responding to the earthquake's vibrational energy), two assumptions were made: (a) the tsunami generated is of low speed and does not generate much turbulence, like the one that occurred in October 1995 at Tenacatita Bay, and (b) the buildings are seismically resistant and would not collapse. Under these conditions, in some places in Puerto Vallarta multi-story hotels could serve for vertical evacuation and shelter for their guests and other people in the case of a tsunami. In other places, moving to a safe area is more feasible: walking to a safe area takes about 11 min at a walking speed of v = 1.1 m/s, or about 16 min under slower, more demanding walking conditions with v = 0.751 m/s (Ashar et al. 2018), while our model estimates that the first tsunami arrival would be at ≥ 19 min (Table 4 and Fig. 6); a back-of-the-envelope check is sketched after the Conclusions. It is important not to forget that a family emergency plan is essential for people with special needs, for whom transportation or precise instructions are needed to facilitate their movement to a shelter. One factor that increases the risk created by a local tsunami in the Puerto Vallarta coastal strip is current land use, which has allowed a reduction of protective zones and high-density construction close to the beach and in the mangrove zones of the Ameca and Salado micro-basins.

Conclusions

The risk area of Puerto Vallarta Municipality affected by flooding due to a local tsunami from an M_W = 8.2 earthquake (d_i = 5 m) was measured to be approximately 30.55 km². The highest risk was found in the northern study area, between the valleys of the Ameca and Pitillal rivers (calculated area 26.5 km²), because this area contains the highest population density in Puerto Vallarta and, within 0.8 km of the coastline, also holds the regional bus station, the international airport and the international port. We consider that the northern hotel zone could serve as a vertical evacuation zone for tourists and local people following such an earthquake and tsunami. Some hotels in Puerto Vallarta could function as refuges; these should be seismically resistant structures with more than four levels. In the southern study zone, from the Boca de Tomatlán community to Playa Camarones, the risk area is 1.5 km², and from some points it is feasible for people to walk to a safe area. In the Puerto Vallarta area, the arrival time (I_t) of the first waves (H_i) is between 19 and 23 min after the earthquake, with H_i between 3.2 and 4.3 m; thus the tsunami warning is the earthquake itself. The run-up arrives (A_t) between 69 and 74 min after the earthquake, with a height (H_r) between 4.6 and 6.8 m. The results presented in this work are fundamental to the design of protocols and action plans for Civil Defense in Jalisco (such as escape routes and safe places during the phenomenon), and they provide essential information for the territorial managers of the state who design the Partial Development Plan. A preliminary evaluation of the essential damage costs for a simplified house and for debris cleaning was also calculated, and maps of the tsunami hazard and local vulnerability at Puerto Vallarta were produced. An adequate perception of the risk by the authorities of Jalisco State will help guarantee that economic and social activities are sustainable.
The forthcoming partial municipal development plans must be reviewed, specifically for the coastal strip up to 20 m elevation in Jalisco State, so that land use favors very low population density and a seismic-resistant construction standard is implemented throughout the municipality.
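A back-of-the-envelope check of the evacuation-time argument made in the Discussion, with an assumed distance to a safe area (720 m is hypothetical, back-calculated from the 11 and 16 min figures):

def walking_time_min(distance_m, speed_m_s):
    return distance_m / speed_m_s / 60.0

FIRST_WAVE_MIN = 19.0  # earliest first-wave arrival at Puerto Vallarta (Table 4)
distance = 720.0       # hypothetical distance to a safe area, in meters
for v in (1.1, 0.751):  # walking speeds from Ashar et al. (2018)
    t = walking_time_min(distance, v)
    status = "feasible" if t < FIRST_WAVE_MIN else "not feasible"
    print(f"v = {v} m/s -> {t:.0f} min ({status})")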
8,961.4
2021-04-28T00:00:00.000
[ "Environmental Science", "Geology" ]
Agency in Educational Technology: Interdisciplinary Perspectives and Implications for Learning Design

Advancing learners' agency is a key educational goal. The advent of personalized EdTech, which automatically tailors learning environments to individual learners, gives renewed relevance to the topic. EdTech researchers and practitioners are confronted with the same basic question: What is the right amount of agency to give to learners during their interactions with EdTech? This question is even more relevant for younger learners. Our aim in this paper is twofold: First, we outline and synthesize the ways in which agency is conceptualized in three key learning disciplines (philosophy, education, and psychology). We show that there are different types and levels of agency and various prerequisites for the effective exercise of agency, and that these undergo developmental change. Second, we provide guiding principles for how agency can be designed for in EdTech for children. We propose an agency personalization loop in which the level of agency provided by the EdTech is assigned in an adaptive manner to strike a balance between allowing children to freely choose learning content and assigning optimal content to them. Finally, we highlight some examples from practice.

Introduction

Agency has been proposed as the key component of human identity and has attracted considerable research attention in the last decade, especially among scholars interested in the impact of artificial intelligence (AI) and studies of human behavior (e.g., Kahneman, 2003; Bittencourt, Cukurova, Muldner, Luckin, & Millán, 2020). Most generally, being an agent involves the capacity "to influence intentionally one's functioning and life circumstances" (Bandura, 2006, p. 164). Yet the precise definition and operationalization of agency differs across disciplines. Various related terms are used to position and describe the construct (e.g., choice, autonomy, sense of control, self-regulation, empowerment), and diverse methodologies are used to measure its impact on behavior (e.g., observational studies, multimodal trace measurement, questionnaires, and experiments). For educational scientists, learner agency is reflected in children's active involvement in educational activities, which is thought to be a fundamental ingredient of learning (Dunlop, 2003). For philosophers, agency, in the form of subjectification, or freedom to act, constitutes a vital counterpart to socialization (Biesta, 2020). Researchers in psychology refer to people's agency beliefs or feelings and see them as foundational to intentional behavior (Bandura, 2006). Despite these different operationalizations, scientists across disciplines recognize agency as a unique human quality that develops across childhood and adolescence and that should be fostered through education (see Duraiappah, van Atteveldt et al., 2022). As technologies gain more data and intelligence, a new era of human-AI interaction is emerging, giving new meaning to the question of agency. More specifically, educational technologies (EdTech for short) increasingly use data-driven algorithms to adapt and automatically personalize content for individual learners. This presents EdTech designers with a challenge: how can they deliver "optimal" content without sacrificing learners' agency? Teachers and educators face a similar challenge, as they need to learn how to adapt their practice in light of EdTech that affects how children can express their agency in classrooms.
As outlined in the next section, supporting children's agency should be a priority in EdTech development and use, but the literature is surprisingly quiet about the ways in which agency could be conceptualized, measured, and implemented in EdTech design and use. We address this gap with a human-centric perspective on agency.

The Importance of Agency in EdTech for Children

EdTech include apps, learning platforms, educational games, and other types of software that are part of the wider children's media landscape. In accordance with the definition of EdTech provided by Escueta et al. (2017), in this paper we focus on technology that is designed and used with a learning intention in mind. Examples include big EdTech players, such as the Epic! subscription library of children's digital books or the ClassDojo web-based learning platform with its digital universe of activities, as well as individual apps such as the Book Creator app for story-making. EdTech are of significant commercial value (estimated at USD 106.04 billion globally in 2021; Grand View Research, 2021) and are a key focus of post-pandemic educational reforms worldwide (Cone et al., 2021). Given the steadily increasing use of EdTech in global education, it is of considerable research and practical interest to develop educational technologies that support children's development and learning (Pérez-Sanagustín et al., 2017; Williamson et al., 2020). In the past twenty-five years, EdTech development has been directed towards improving learning platforms, apps, and e-books that personalize learning content to the needs of individual learners (Van Schoors, Elen, Raes, & Depaepe, 2021). EdTech that adapt to the user in the course of learning draw on intelligent tutoring systems or adaptive learning technologies (Walkington, 2013) to automate the interaction between the learner and the digital content. Such personalized EdTech are of most acute interest to the question of agency, given that these systems automate choices and control what learners see on their screens, as opposed to allowing learners to self-regulate their learning (see the "Psychology Perspective" section for a brief treatment of the relation between self-regulated learning and agency). Intuitively, we can understand that the agency given to children in EdTech solutions differs depending on the level of personalization and automation. For example, with some EdTech, such as the Lexia Reading Core5® reading software, children have very few choices, as they are automatically provided with texts and games based on their progress on previous activities, linguistic competence, and reading ability. With other types of EdTech, such as the Our Story app developed by The Open University, children have many more choices in the stories they create and share, in that they can add their own texts, images, or audio-recordings in any sequence, length, or combination. The design of EdTech, and with it the extent of user choice and involvement, varies across apps and platforms, which raises the need to agree on parameters or criteria for evaluating the extent of, and the way in which, children's agency is granted or supported by EdTech. Leading evaluation rubrics of EdTech draw on learning theories to specify the learning conditions and design features for specific subject areas, such as literacy, mathematics learning, or learning more broadly (e.g., Cherner & Lee, 2014; Callaghan & Reich, 2018).
Examples include Hirsh-Pasek et al.'s (2015) criteria for evaluating educational apps and the six criteria of reading quality for rating the educational quality of children's e-books developed by Kucirkova et al. (2017). These and other existing rubrics (e.g., Papadikis et al., 2018) define the learning conditions under which children using EdTech advance their knowledge and understanding, including the safety of EdTech and interaction design that encourages active, engaged learning (Zosh, Golinkoff, & Hirsh-Pasek, 2017). Despite the high importance of agency for learning, however, the level of agency provided by various EdTech has so far not been considered in evaluation rubrics and evidence-based guides, which we believe is a significant gap.

Aims and Objectives

Agency is an important construct across the various sciences of learning, arguably most prominently in education, philosophy, and psychology. As an interdisciplinary expert group, we examine key insights on how agency is conceptualized in these three disciplines, which are represented by the authors, and we use insights from all three to define guidelines for how agency can be designed for in personalized EdTech. The issue of agency in EdTech is particularly relevant for children, given that EdTech play an increasing role in formal and informal education. Moreover, because of their immature cognitive and metacognitive abilities, children often approach learning opportunities differently than adults, suggesting that they need support in developing the skills that allow them to play an agentic role in their own learning (Brod, 2021; Marulis et al., 2019). We therefore focused on research directed at young learners where it was available in the domain. We conclude with concrete examples and provide a rubric for how agency can be designed for in EdTech. This approach allows us to derive robust interdisciplinary guidance for researchers and designers interested in understanding and implementing agency in EdTech. In sum, the article has two major goals: 1. To provide a brief overview of how agency is conceptualized in education, philosophy, and psychology and to synthesize these into an interdisciplinary perspective on agency. 2. To provide guidelines on how agency can be researched and designed for in children's EdTech.

Philosophy Perspective

Philosophical discussion of the nature of agency covers a range of complicated phenomena and crosses different sub-fields, including philosophy of action, philosophy of mind and psychology, ethics, aesthetics, and epistemology. One way to roughly map this discussion is in terms of different types of agency. These types emerge when philosophers focus on the ways that clustered aspects of normativity impact the expression of agency via actions, practices, and achievements. These discussions are usually offered with respect to one (or perhaps two) normative structures at a time, some of which are more directly relevant to the field of EdTech than others:

- Epistemic agency (Sosa, 2015). This is the kind of agency that is sensitive to epistemic norms (e.g., norms regarding when a belief or action is justified) and to epistemic considerations (considerations that influence epistemic behavior such as belief updating, evidence assessment, and evaluation of testimony).
Discussion here often focuses on the cognitive capacities agents may exercise as they attempt to follow epistemic norms, or on the capacities or modes of social organization that are central to the promotion of epistemic values, such as knowledge or understanding.

- Skilled agency (Pacherie and Mylopoulos, 2021). This kind of agency often results from a conjunction of agentive capacities, honed by practice and natural talent, that produce highly competent or excellent action within or across various domains (e.g., skill at chess, basketball, or painting).

- Aesthetic agency (Lopes, 2018). This is the kind of agency that is sensitive to characteristically aesthetic norms and values, or to the norms and values common to various aesthetic domains (such as painting, sculpture, or musical composition). Discussion here might focus upon the capacities agents have to devise, appreciate, follow, and produce aesthetic norms and values.

- Moral agency (Arpaly, 2002; Stichter, 2018). This is the kind of agency that is sensitive to moral rules, practices, and values. Discussion here often focuses on moral learning, on the human ability to understand and follow moral rules, or on the practices governing the attribution of praise and blame.

- Practical agency (Bratman, 2007; Shepherd, 2021). This is a very general kind of agency and concerns the capacities agents have to understand and follow norms of practical rationality, to engage in planning and reasoning, to develop projects that contain multiple goals, and to reason about how best to satisfy these goals.

- Group or corporate agency (List and Pettit, 2011). This is a kind of agency displayed by collections of individuals, often through the embedding of individuals within some social architecture (such as a corporation). Discussion here often focuses on the capacities that groups or institutions have to display internal structures and external behavior identical or analogous to those that normal human agents display. Such discussion is often motivated by a concern to understand the extent to which groups or corporations can bear responsibility for behavior that is not the product of any one individual but of the group taken as a whole.

While discussions of different types of agency are usually offered with respect to one (or perhaps two) normative structures at a time, these types clearly interact and overlap in actual humans. All of us are, to some extent, aesthetic agents, epistemic agents, moral agents, and so on. Some of us display high levels of some of these forms of agency; many of us display only minimal competence (or worse) in some of them. How these types of agency relate to one another in common forms of human interaction remains an open area of philosophical and psychological inquiry. In light of this variety, it is fair to say that we do not have a unified philosophy of agency, but rather many philosophical perspectives on many facets of agency. A complementary mapping might focus on the fact that much work on agency (often implicitly) characterizes agency as falling along a spectrum of sophistication, resulting in various levels (or degrees) of agency, where these levels vary according to the reliability, flexibility, and sophistication of the capacities that constitute agents. These capacities include perceptual modalities and sensorimotor coordination, as well as cognitive capacities such as reasoning, imagination, language, and metacognition.
In the philosophy literature, we also find discussion of agency at different levels of sophistication. For example, there is some discussion of the primitive agency of the sort that insects, and arguably even simpler organisms, display (Burge, 2009); there is perhaps slightly more discussion of the agency displayed by non-human mammals (e.g., Steward, 2009); and by far the most attention has been paid to the more sophisticated types of agency that human agents are capable of, e.g., autonomous agency (Mele, 1995), morally responsible agency (McKenna, 2012), self-conscious or self-knowing agency (Anscombe, 2000), and shared or joint agency (Bratman, 2013). Finally, and promisingly given our interest in the relevance of conceptions of agency to the development of educational technology, philosophers of agency have recently turned to reflection on the social construction of (aspects of) agency. This has included analysis of the ways that culture, technology, social norms, and practices of socialization (including practices of praising and blaming) scaffold and shape agency from birth to adulthood, for better or worse (see, e.g., Vargas, 2013; McGeer, 2019). Indeed, Liao and Huebner (2021) have argued that not only other agents but also things, material artefacts and spatial environments, are often integrated into systems that may be oppressive (racist, classist, sexist, ableist) and thus detrimental to the development of agency. A challenge in present-day philosophy of agency is that the discipline has done little to advance our understanding of children's agency. Instead, children are assumed to represent a paradigm case of less sophisticated, not-fully-formed agency, especially in discussions surrounding legal and moral responsibility. However, a number of ideas drawn from the philosophy of agency more broadly may be relevant to our conceptualization of children's agency. It is clear that agency is a multi-faceted phenomenon and that the influence of our choices regarding educational technology will often impact multiple facets at once. Arguably, educators, parents, and even peers are always building the agency of children, sometimes intentionally, but often incidentally or even accidentally. Thus, we have to ask about the dimensions and aspects of agency that various educational technologies influence, both intentionally and incidentally. We lack the space to discuss in detail how this would go for every type of agency, but an example may help. Let us briefly consider epistemic agency. Since the specific content of the norms that influence belief, action, and assertion enjoys little consensus in philosophy, the value of philosophical discussion of epistemic agency for EdTech development might primarily lie in prompts towards EdTech that allows learners to explore the epistemic space. Children face a complex epistemic landscape of fake news and propaganda alongside facts of varying importance, as well as competing claims about evidence and about expertise (i.e., who has the authority to make certain assertions). Additionally, we know that some belief-updating mechanisms are easy to trick; people are more likely, for instance, to reject evidence that challenges their self-conception (e.g., Porot & Mandelbaum, 2021). EdTech might be developed in ways that allow children to enhance their epistemic agency, that is, to gain familiarity with epistemic behavior such as evidence assessment, propaganda detection, and decisions to change one's mind.
Of course, the development of EdTech may take inspiration from philosophical analyses of aspects of agency, but such inspiration needs to be scaffolded by an understanding of the educational and psychological factors that drive the development of agency. The philosophical perspective on agency must therefore find synergies with these other perspectives.

Educational Perspective

Education, understood as a human undertaking to gain knowledge, is to a large extent influenced by historical, socio-cultural, and political traditions of pedagogy, didactics, and national policy. It follows that definitions of agency rely on eclectic approaches and that "an unpacking of the notion of agency needs to be combined with reconnecting agency to the wider social structures in which it is embedded" (Coe, 2013, p. 272). In early childhood education, agency has been substantially theorized and empirically investigated (e.g., Cieciuch & Topolewska, 2017). In particular, this concerns its relation to identity development from birth to adulthood, building on Erikson's (1963, 1968) seminal work, and its relation to self-awareness and self-relevance in the field of art education (e.g., Dunn, Gray, Moffett, & Mitchell, 2018; Sakr, 2017). Connecting to Giddens' (1984) work, Redmond (2009) defines agency as the "capacity to act" (p. 544) and describes it in terms of the choices available to young children and children's awareness of these choices. The social justice discourse in education argues that educational environments, including digital environments, need to be designed in ways that socially empower children to make their own choices (Vanbecelaere et al., 2020) and ensure that all children, regardless of background or predisposition, can actively participate in meaning making, literacy, and learning activities (Hempel-Jorgensen, 2015). Learning environments that disregard children's active participation negatively impact their development (Berthelsen & Brownlee, 2005), and deficit discourses that position children as lacking certain capabilities, including agency, disregard the collective forces that shape the opportunities available to children to express their individuality and particularity (e.g., Mary & Young, 2018; Carela, 2019). Stenalt (2021) argued for more critical approaches to students' agency in relation to technologies used in higher education and proposed that these should take into account the relational, cultural, and technological aspects of student-technology learning interactions. Stenalt's (2021) theoretical framework of digital student agency recognizes the reciprocity between students' and technologies' contributions to a learning situation and considers, for example, the use of data and digital self-representation. These insights are useful for adult students, who can manage their data or choose who has access to their data and when (Jääskelä, Heilala, Kärkkäinen, & Häkkinen, 2021). For young learners, who are the focus of our work, there is a need to strike an optimal balance between children's independent and adult-mediated agency (Eriksson and Lindberg, 2016). EdTech design can disrupt this balance with features that provide or constrain the space for adult-child shared and independent interaction.
Parents (Montazami et al., 2022a) and teachers (Montazami et al., 2022b) are cognizant of the qualitative differences in EdTech and the choices it offers to children, and they have expressed the need for more guidance on the types of EdTech that optimally support children's learning and development. Our article taps into that need and considers the implications of children's agency for the selection and implementation of EdTech in classrooms, with attention to EdTech design that is most conducive to children's active participation in learning. Design-based educational research has illuminated the ways in which specific apps and digital books can support children's active contribution in the form of reading, writing, and multimodal composing (e.g., Kucirkova, 2018; Kim, 2022) and the uniqueness of each family in approaching the dynamics of digital storytelling (e.g., Rogers & Bird, 2020). Kajamaa and Kumpulainen (2019) highlighted the influence of creative and power-challenging features of new digital learning environments on children's agency (see also Kumpulainen, Sairanen & Nordström, 2020; Sairanen, Kumpulainen & Kajamaa, 2022), and several qualitative studies conducted in home (e.g., Scott, 2022) and pre-school learning environments (e.g., Ma, Wang, Fleer & Li, 2022) have documented the dynamic ways in which agency manifests in child-adult and child-child interactions with technologies. The close attention to contextual and socially co-constructed facets of children's interactions has been a strength of educational approaches that is only beginning to be considered in relation to children's agency and EdTech.

Psychology Perspective

In psychology, research focuses on the mechanisms of human agency, including the abilities that allow an individual to exercise control over their thoughts and behavior, and their perception of these abilities. Psychology mostly uses the term "agency beliefs" to denote an individual's perceived capacity to produce desired effects by their actions, treating this perception as the central psychological mechanism of human agency (Bandura, 2006). A closely related construct is the belief in personal efficacy, which has been suggested to be the foundation of human agency and to affect an individual's goal setting and goal striving (Bandura, 1989). Similarly, according to self-determination theory, greater perceived autonomy is related to an increased motivation to learn (Ryan & Deci, 2000). A wealth of studies has demonstrated strong links between self-reported efficacy beliefs and motivation, performance, and overall well-being (e.g., Holden, 1992; Multon, Brown, & Lent, 1991). Agency beliefs thus influence how high people set their goals, how they strive to achieve them, and whether they easily give up in the face of difficulties or persist. Recent research has focused on manipulating agency beliefs by giving people more or less choice (e.g., Bobadilla-Suarez, Sunstein, & Sharot, 2017; Leotti & Delgado, 2011). This research indicates that people actively seek to have choice because they perceive it as intrinsically rewarding. Results of choice preference tasks indicate that people select opportunities that give them choice and that anticipating such an opportunity is associated with increased activity in brain regions involved in reward processing (Leotti & Delgado, 2011). They are even willing to forgo monetary rewards in order to retain agency (Bobadilla-Suarez, Sunstein, & Sharot, 2017).
In sum, in line with Bandura's earlier notions, the perception of having control over one's learning has clear beneficial effects on students' motivation. These beneficial effects of agency may even form a positive feedback loop, in that having successfully controlled one learning situation boosts learners' self-efficacy, which then benefits learners' goal setting and goal striving in the next learning situation. Studies on declarative learning indicate that the perceived capacity to exercise control over one's learning can have beneficial effects for learning as well. Even if students can only control incidental aspects of the learning context, this benefits their engagement in learning and their learning test scores (Cordova & Lepper, 1996). Giving them choices that are of high utility likely leads to higher learning gains, however (Katzman & Hartley, 2020; Markant, DuBrow, Davachi, & Gureckis, 2014). Of note, tricking people by giving them perceived but not actual control over the learning content has likewise been shown to improve their memory for the content (Murty, DuBrow, & Davachi, 2015; Schneider, Nebel, Beege, & Rey, 2018). While there is good evidence for a beneficial effect of agency on simple fact learning, there is little evidence concerning more complex learning scenarios, such as when learners are given the choice of which tasks to work on. On the contrary, a wealth of evidence suggests that most students do not know how to manage their learning effectively (e.g., Bjork, Dunlosky, & Kornell, 2013). This leads to ineffective choices, such as the selection of overly easy tasks or of ineffective learning strategies (e.g., rereading, underlining). The relation between agency beliefs and learning differs between individuals as well. Student characteristics such as age, knowledge, or metacognitive capacities likely all play a role in determining whether greater autonomy helps or hinders learning (Brod, 2021b). While elementary-school children already prefer tasks that give them some choice (e.g., Brod, Breitwieser, Hasselhorn, & Bunge, 2020), the link between students' agency beliefs and their performance in cognitively challenging tasks has been shown to increase across the elementary and early secondary school years (Chapman, Skinner, & Baltes, 1990). Besides students' beliefs about their agency, the extent to which individuals can exert control over their own learning also depends on their abilities to form intentions, envision future scenarios, monitor their own functioning, and adjust their behavior accordingly (Bandura, 2006). These properties of agency show conceptual overlap with the related constructs of executive functions, metacognition, self-regulation, and self-regulated learning. There is no consensus in the literature on the extent to which these constructs represent identical abilities or are nested within one another, and it goes beyond the scope of the current article to provide a comprehensive overview of the use of these different constructs in the literature (but see Dinsmore et al., 2008, for a discussion of the theoretical and empirical boundaries between the constructs of metacognition, self-regulation, and self-regulated learning). At their core, these constructs all involve the awareness that individuals have of their own thoughts and behavior and the effort that they make to gain control over them (Dinsmore et al., 2008). Importantly, metacognition and self-regulation abilities are subject to developmental change.
Over the course of development, children become increasingly able to reflect on their own learning and on the ways they can adjust their behavior to optimize performance. This allows children to move from a reactive to a more proactive or "agentic" mode of learning (Marulis et al., 2019). Another related factor to consider in this context is the development of executive functions. Executive functions are a family of mental abilities that play a key role in top-down, goal-directed behavior (e.g., Diamond, 2013; Zelazo et al., 2016). Three core executive functions are inhibition (i.e., the ability to resist inappropriate thoughts or behavior and suppress interference), working memory (i.e., the ability to hold information in mind and work with it), and cognitive flexibility (i.e., the ability to switch between different (mental) tasks or perspectives; see Diamond, 2013; Miyake et al., 2000). These three basic executive functions are essential for more complex cognitive functions such as reasoning, problem solving, and planning (Diamond, 2013) and play an important role in learning and academic achievement (for a review, see Zelazo et al., 2016). As such, executive functions can be considered a core aspect of self-regulation (Zelazo et al., 2016) or a means of enabling self-regulation (Roebers, 2017). Importantly, a wealth of research has shown that executive functions, self-regulation, and metacognition show a protracted development (for reviews, see Diamond, 2013; Roebers, 2017; Marulis et al., 2019). This suggests that children and even adolescents may not be able to control their own learning in the most effective and efficient way possible. In line with this suggestion, a recent experimental study investigated how the ability to actively control study behavior affected later memory performance among kindergarten and elementary-school children (Ruggeri, Markant, Gureckis, Bretzke, & Xu, 2019). It found that the benefit of giving learners control over their studying emerged around the age of six and continued to increase across the elementary-school years. The authors speculated that this might have to do with the ongoing development of the cognitive and metacognitive abilities necessary for making effective study decisions (Paris & Newman, 1990). In sum, research suggests that the effect of agency on learning increases at least until the early secondary-school years and likely even longer.

The Contribution of Philosophy, Education, and Psychology to an Interdisciplinary Perspective on Agency

Reviewing these three disciplinary perspectives on agency suggests that there is no simple definition that can capture the notion of agency. Across philosophy, education, and psychology, we find different ways to emphasize aspects of a complex, multi-dimensional construct. Recognizing the complexity and multi-dimensionality of agency is important for present purposes. For if agency is multi-dimensional, then researchers, designers, and users of EdTech must make choices and offer justifications regarding which dimensions to highlight or nurture, and which to ignore. Importantly, EdTech can find inspiration and guidance by drawing from interdisciplinary research on aspects of agency. Considering how the three disciplinary perspectives on agency complement each other, we see most merit in the philosophical perspective in asking and specifying why agency is important for humans and in proposing the different types and levels of agency that a learner may have.
The educational perspective highlights the dynamic interactions between the child, adult and technology, and the contextually dependent pedagogies that can support learning-relevant interactions. The psychology perspective suggests that giving learners more opportunities to exercise control has beneficial effects on motivation and engagement, which can, provided that the learner has sufficient skills to use this freedom effectively, translate into better learning. Yet the ability to exercise control is also something that undergoes significant developmental changes, related to the maturation of executive functions and metacognition. In sum, the psychology perspective seems most directly relevant for evaluating and developing EdTech. Yet the philosophy and education perspectives provide a framework to think about agency and point to the normative aspects associated with EdTech. These perspectives thus sit at a higher hierarchical level and help frame the more operational perspective of psychology. Psychological theories suggest that whether agency improves children's learning in the context of EdTech will depend on children's learning prerequisites. Of key importance are their prior knowledge of the to-be-learned content as well as their executive function and metacognitive skills (Brod, 2021a, 2021b). High levels of prior knowledge in the domain of study facilitate the organization of to-be-learned material and free up cognitive capacities, which can then be invested in choosing appropriate learning environments and tasks to work on. Conversely, students with low levels of prior knowledge struggle with choosing appropriate learning environments and tasks to work on (Kirschner, Sweller, & Clark, 2006). Executive functions such as working memory, inhibition, and cognitive flexibility facilitate efficient shifting between tasks and underlie the ability to reason about the learning conditions that suit the individual learner. Metacognitive skills refer to students' ability to evaluate their current learning and to initiate corrective adjustments if needed. They are thus crucial for selecting tasks to work on, monitoring learning progress, and evaluating whether to continue with a task. Executive functions and metacognitive skills together enable effective self-regulation of one's learning (Roebers, 2017). Research on self-regulated learning combines parts of the educational and psychological perspectives. Effective self-regulated learning is conceptualized as a goal-directed process in which learners consciously make decisions that lead towards their learning goals (Azevedo, 2015). Learners set learning goals to plan their learning and attain these goals by adjusting their strategies (Winne, 2017). They also monitor whether their actions support progress towards their learning goal (Azevedo, 2009). Yet research has consistently indicated that many learners experience difficulties with adequately self-regulating their learning (Greene & Azevedo, 2010; Järvelä et al., 2013). Consequently, many learners need external support to engage in successful regulation. To complicate matters further, as described above, learners' self-regulated learning is also influenced by learner characteristics such as prior knowledge, age, and motivation, and by characteristics of the learning environment such as domain and learning topics, emphasizing the need to adjust support to individual learners (Dignath, Buettner, & Langfeldt, 2008).
The latter, especially, relates to a body of research on self-regulation-enhancing learning environments (Perry, 1998; Dignath & Veenman, 2021). This line of research indicates that a certain amount of choice in learning is important for metacognitive skills to be able to develop. Hence, students should be given enough agency to make decisions and exercise control during learning. This leads us to our second research question, concerning the ways in which the different levels of agency can be researched and designed for in EdTech to support various learner needs and prerequisites.

How Can Agency Be Researched and Designed for in Personalized EdTech?

With increasingly sophisticated possibilities for adapting content to user characteristics, engagement, and performance, EdTech designers face a dilemma: do they allow learners to choose learning content in the order and ways they prefer, or do they assign "optimal" content selected for learners by data-driven algorithms? When AI provides personal recommendations to individual learners, does it extend or limit students' and teachers' choices in the learning content, activities, and environments they engage with? As detailed in the previous section, a design that maximizes agency by leaving the choice completely up to the learner is likely to lead to ineffective learning, because most learners, and particularly younger ones, do not know how to manage their learning effectively. Conversely, a design that minimizes learner agency is likely to interfere with learners' self-regulated learning and potentially reduce the development of executive function and metacognitive skills (Molenaar, 2022). Therefore, what is desirable is an EdTech design that adapts agency levels to different learners and changes the level of agency assigned to a particular learner over time. In what follows, we conceptualize what such an adaptive assignment of agency levels by EdTech could look like. To the best of our knowledge, such an adaptive assignment of agency has not been implemented in current EdTech. However, different commercial EdTech applications afford agency in different ways and at different levels. Therefore, we provide some guiding principles along with a table in which we describe specific examples of EdTech design that correspond to different levels of agency. This table can be applied in future EdTech design as well as in research evaluating its effectiveness.

EdTech and Adaptive Assignment of Agency: Theory and Empirics

Our conceptualization of an adaptive assignment of agency levels to learners resonates with recent proposals to make personalized education more dynamic by adapting to a specific learner at a specific point in time in the instructional process (see Tetzlaff, Schmiedek, & Brod, 2020). Figure 1 shows how these ideas can be applied to assigning different levels of agency to learners. The simplified version of such a personalization loop consists of three steps: (1) identification and assessment of relevant learner characteristics (e.g., prior domain knowledge, executive function, and metacognitive skills), which form a student model; (2) algorithmic assignment of the level of agency to give to the student based on the student model (see Table 2 for specific examples of agency); (3) learning progress assessment, which uses task performance data to update the student model. Following this personalization loop, the level of agency provided by the EdTech can vary both between different learners and within a particular learner over time.
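To make the three-step loop concrete, the following minimal Python sketch shows one pass through it. All names (StudentModel, assign_agency_level, update_model), the thresholds, and the update rule are hypothetical illustrations, not an existing EdTech API.

```python
# Minimal sketch of the three-step agency personalization loop.
# All names, thresholds, and update rules are hypothetical.
from dataclasses import dataclass

@dataclass
class StudentModel:
    prior_knowledge: float     # 0..1, estimated domain knowledge
    metacognition: float       # 0..1, estimated metacognitive skill
    executive_function: float  # 0..1, estimated executive function

def assign_agency_level(model: StudentModel) -> int:
    """Step 2: map the student model to an agency level
    (0 = full system control, 2 = full learner control)."""
    readiness = (model.prior_knowledge + model.metacognition
                 + model.executive_function) / 3
    if readiness < 0.4:
        return 0  # system selects the task
    elif readiness < 0.7:
        return 1  # shared control: learner picks from a preselected subset
    return 2      # learner selects freely

def update_model(model: StudentModel, task_score: float) -> StudentModel:
    """Step 3: learning-progress assessment updates the model
    (here: a simple exponential moving average on prior knowledge)."""
    model.prior_knowledge = 0.8 * model.prior_knowledge + 0.2 * task_score
    return model

# One pass through the loop:
student = StudentModel(prior_knowledge=0.3, metacognition=0.5,
                       executive_function=0.6)   # step 1: assessment
level = assign_agency_level(student)             # step 2: agency assignment
student = update_model(student, task_score=0.9)  # step 3: progress update
```

In a real system, the level returned in step 2 would govern, for example, whether the learner, the system, or both jointly choose the next task.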
The proposed agency personalization loop is deliberately generic. It is assumed to be applicable to the various ways in which control can be shared between students and EdTech (see Table 2 below). EdTech, particularly systems that rely on data-based personalization techniques, allows designers to give different levels of control to different learners and at different time points during the instructional process. We argue that the adaptation process should systematically combine characteristics of the learner with their learning progress to determine the level of agency that learners are provided with at different points in time. Examples of what this agency personalization loop can look like in practice have already been provided for one parameter of instructional control: task selection. Corbalan, Kester, and van Merriënboer (2006, 2011) proposed and tested a personalized task-selection model with shared instructional control, in which an algorithm preselects a subset of tasks/difficulty levels based on a student model (called a learner portfolio there) and then allows learners to make the final selection from this subset. The scope of the preselected tasks is inversely related to learners' prerequisites, thus enabling greater choice for more advanced learners. Performance and invested mental effort are assessed during task execution, and these data are used to update the student model. Initial data from the research group (Corbalan et al., 2006) indicate higher learning scores with shared instructional control than with full system control. In a similar vein, shared control over problem selection has been designed in the context of Intelligent Tutoring Systems (Long & Aleven, 2016). These systems indicate the mastery level of the student on a particular learning objective, which guides the learner in making the selection of practice activities.

Fig. 1 Agency personalization loop. Relevant learner characteristics form an initial student model, which determines the initial level of agency given to a learner. The assignment of agency level is, in turn, updated based on measures of task performance.

In a classroom experiment, Long and Aleven (2016) showed that shared control over problem selection led to better learning outcomes than fully system-controlled problem selection. They also found effects on students' knowledge of problem-selection strategies, but there was no transfer to future learning contexts. Again, this supports the added value of shared control both for learning and for the development of knowledge to regulate one's own learning. Taken together, these two examples demonstrate that an adaptive assignment of agency levels to learners is not only desirable but also feasible. Assigning different levels of control over the selection of tasks to students in an adaptive way can benefit both their learning outcomes and their development of self-regulated learning skills. To summarize, the proposed agency personalization loop suggests that the level of agency provided by EdTech should be assigned in an adaptive manner, taking into account characteristics of the learner and their learning progress, as suggested by psychological research. In this way, EdTech can strike a balance between too much agency, which leads to ineffective learning, and too little agency, which hampers learners' motivation and the development of self-regulated learning skills.
This also aligns with the philosophical and educational perspectives: it emphasizes the normative point that children should have agency, it takes into account the importance of the interplay between the learner, the teacher (or teaching agent) and the environment, and, in line with the philosophical perspective, it does not presume a neutral technology. To the best of our knowledge, such an adaptive assignment of control has not been implemented for other types of instructional control (e.g., task content, appearance), nor has it been widely implemented in EdTech tools. Therefore, in the final section of the manuscript, we sketch guiding principles for what adaptive EdTech design for agency can look like and how it can be implemented in both design and educational practice. To make these principles as concrete as possible, we present examples of existing EdTech applications that differ in how and to what extent they afford agency to users.

EdTech and Adaptive Assignment of Agency: Design and Practice

Table 1 provides guiding questions and corresponding examples of the ways in which agency can be researched, understood, and designed for in EdTech. The questions are intended to guide both the theorization of agency in technology-enhanced learning contexts and the practical development of EdTech. The questions reflect the insight of the philosophical perspective that different types and levels of agency should be distinguished, and ideas from the educational and psychology perspectives on what can be controlled by children. The ordering along wh-questions (i.e., what, when, where, who) is intended to illustrate that there is no hierarchy among the options, nor is it the case that learner agency either is or is not present. Rather, EdTech designers need to aim to strike an optimal balance between the various options available to learners at different points in their learning journey. For instance, young students who do not have the cognitive capacity and prior knowledge to make effective decisions regarding their own learning path could be given choices regarding problem selection, as discussed above, which may increase their ability to exert control without sacrificing learning effectiveness. Yet older students or more advanced learners may experience additional benefits when they are given more autonomy regarding task content and progression. In fact, a task that provides high levels of guidance may even be disadvantageous for more advanced students, an effect known as the "expertise-reversal effect" (Kalyuga, 2007). Table 2 is intended to provide some concrete examples of how agency can be implemented in EdTech. It thus concretizes the design levers listed in Table 1. In accordance with the notion of different types and levels of agency, we separate what can be controlled (i.e., the content, sequence, and appearance of an activity or task, as well as the learner's self-representation) from who is exerting the control (i.e., the student, the EdTech, or shared by teacher and student). To make the individual criteria in the table as specific as possible, we provide illustrative examples of existing EdTech. In selecting these examples, we drew upon popular EdTech advertised as supporting children's language and/or literacy learning (i.e., reading, writing, vocabulary) in the UK/US app stores. We believe that bringing EdTech designers and interdisciplinary researchers together would be a good way to advance an evidence-based approach to agency in EdTech.
The guiding questions can be a good starting point for this exchange.

Theoretical Contributions and Limitations

Our interdisciplinary review of the agency literature revealed that there are different types and levels of agency and various prerequisites for an effective exercise of agency, and that these undergo developmental change. Moreover, the psychology literature shows that it is not only agency itself but also students' beliefs about agency that should be taken into account. By bringing these insights together, we have extended mono-disciplinary approaches to agency and described implications for how researchers study children's agency in different contexts and with different resources. While our conceptualization of agency draws on three key disciplines of the learning sciences, it does not include insights from anthropology, sociology, neuroscience, or other related disciplines. Future research could usefully expand our initial formulations with further theoretical insights from these and other disciplines.

Design Contributions and Limitations

Our proposed agency personalization loop suggests that EdTech designers must make choices and provide justifications for which dimensions to emphasize and which to ignore when designing various contents, activity flows, and learner representations. The level of agency provided by EdTech should be assigned in an adaptive manner and strike a balance between allowing children to freely choose learning content and assigning optimal content to them. We chose to look specifically at young learners because we think that the issue of developing agency is most pressing there. In addition, we pointed out that control can also be shared between EdTech and teachers. While this reduces the agency of the child, it has the advantage that the teacher knows the child well and is therefore particularly able to support them. We suggest that in future EdTech design, attention should be paid to what can be controlled by learners as they progress in their learning, and to what extent this helps or hinders their learning and development.

Practice Contributions and Limitations

When teachers offer learners unlimited choices, this can lead to ineffective learning. But if they control children's choices too much, they can hamper children's motivation and the development of self-regulated learning skills. Given the variety of EdTech and the uneven quality of EdTech solutions on the market, teachers need to carefully select which EdTech they use in the classroom. We suggest that teachers reflect on students', EdTech's, and their own levels of agency in an activity and apply our guiding principles for selecting appropriate EdTech products. Given that more agency does not always translate into enhanced learning, and that optimal levels of agency differ between learners as well as within a particular learner over time, the extent of agency should be adapted to the specific learner in a systematic manner. In conclusion, EdTech has to take into account the dynamics of children's agency, and we have provided both an interdisciplinary perspective on the topic and practical guidance on how to do so. In this way, educational technologies can strike an optimal balance between learning effectiveness and learner engagement and thus better deliver on their promise of being educational.
chrF++: words helping character n-grams

Introduction

Recent investigations (Popović, 2015; Stanojević et al., 2015; Popović, 2016; Bojar et al., 2016) have shown that the character n-gram F-score (CHRF) represents a very promising evaluation metric for machine translation, especially for morphologically rich target languages - it is fast, it does not require any additional tools or information, it is language independent and tokenisation independent, and it correlates very well with human relative rankings (RR) (Callison-Burch et al., 2008). In order to produce these rankings, human annotators have to decide which sentence translation is better/worse than another without giving any note about the absolute quality of any of the evaluated translations. This type of human judgment was the official evaluation metric and gold standard for all automatic metrics at the WMT shared tasks from 2008 until 2016. Another type of human judgment, direct human assessment (DA) (Bojar et al., 2016), has become an additional official evaluation metric for WMT-16 and the only one for WMT-17. These assessments consist of absolute quality scores for each translated sentence. Contrary to RR, the relation between CHRF and DA has still not been investigated systematically. Preliminary experiments in previous work (Popović, 2016) showed that, concerning DA, the main advantage of the character-based F-score CHRF in comparison to the word-based F-score WORDF is a better correlation for good translations, for which WORDF often assigns too low scores. In this work, we systematically investigate the relations between DA and both character and word n-grams, as well as their combinations. The scores are calculated for all available translation outputs from the WMT-15 and WMT-16 shared tasks (Bojar et al., 2016), which contain two target languages, English (translated from Czech, German, Finnish, Romanian, Russian and Turkish) and Russian (translated from English), and then compared with DAs on the segment level using Pearson's correlation coefficient.

n-gram based F-scores

The general formula for an n-gram based F-score is

F_β = (1 + β²) · (ngrP · ngrR) / (β² · ngrP + ngrR),

where ngrP and ngrR stand for n-gram precision and recall arithmetically averaged over all n-grams from n = 1 to N:

• ngrP, n-gram precision: percentage of n-grams in the hypothesis which have a counterpart in the reference;
• ngrR, n-gram recall: percentage of n-grams in the reference which are also present in the hypothesis;

and β is a parameter which assigns β times more weight to recall than to precision. WORDF is then calculated on word n-grams and CHRF is calculated on character n-grams. As for the maximum n-gram length N, previous work reported that there is no need to go beyond N=4 for WORDF (Popović, 2011) and N=6 for CHRF (Popović, 2015). The CHRF++ score is obtained when the word n-grams are added to the character n-grams and averaged together. The best maximum n-gram lengths for such combinations are again N=6 for character n-grams and N=2 or N=1 for word n-grams, which will be discussed in Section 4.3.

Motivation for adding word n-grams to CHRF

A preliminary experiment on a small set of texts reported in previous work (Popović, 2016), with different target languages and different types of DA (none of them equal to the variant used in WMT), showed that for poorly rated sentences, the standard deviations of the CHRF and WORDF scores are similar - both metrics assign relatively similar (low) scores. On the other hand, for the sentences with higher human rates, the deviations for CHRF are (much) lower.
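As a rough illustration of the formulas above, the following Python sketch computes a simplified CHRF++-style score (character n-grams up to N=6 plus word n-grams up to N=2, averaged together before applying F_β). It is not the official implementation; in particular, it keeps whitespace inside character n-grams, which the original tool may handle differently.

```python
# A minimal sketch of the n-gram F-score defined above (illustrative,
# not the official chrF++ implementation). Matches are clipped via
# Counter intersection.
from collections import Counter

def ngram_counts(units, n):
    return Counter(tuple(units[i:i + n]) for i in range(len(units) - n + 1))

def prec_rec(hyp_units, ref_units, N):
    """n-gram precision and recall for n = 1..N (one value per n)."""
    precs, recs = [], []
    for n in range(1, N + 1):
        h, r = ngram_counts(hyp_units, n), ngram_counts(ref_units, n)
        match = sum((h & r).values())
        precs.append(match / max(sum(h.values()), 1))
        recs.append(match / max(sum(r.values()), 1))
    return precs, recs

def chrf_pp(hyp, ref, beta=2.0, char_N=6, word_N=2):
    """CHRF++-style score: character and word n-gram components are
    averaged together, then combined into F_beta (recall-weighted)."""
    cp, cr = prec_rec(list(hyp), list(ref), char_N)      # character n-grams
    wp, wr = prec_rec(hyp.split(), ref.split(), word_N)  # word n-grams
    P = sum(cp + wp) / (char_N + word_N)
    R = sum(cr + wr) / (char_N + word_N)
    return 0.0 if P + R == 0 else (1 + beta**2) * P * R / (beta**2 * P + R)

print(round(chrf_pp("the cat is on the mat", "the cat sat on the mat"), 3))
```

Setting word_N=1 in this sketch would correspond to the CHRF+ variant discussed later.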
In addition, the higher the human rating is, the greater the difference between the WORDF and CHRF deviations. These results indicate that CHRF is better than WORDF mainly for segments/systems of higher translation quality - the CHRF scores for good translations are more concentrated in the higher range, whereas the WORDF scores are often too low. In order to further investigate these premises, the scatter plots in Figure 1 were produced for CHRF and WORDF against DA for the Russian→English and English→Russian WMT-16 data. Figure 1 confirms the findings from previous work, since a number of WORDF values are indeed pessimistic - high DA but low WORDF - whereas the CHRF values are more concentrated, i.e. they correlate better with the DA values. However, these plots raised another question: are CHRF scores maybe too optimistic (i.e., are there segments with a high CHRF score and a low DA score)? Certainly not to such an extent as the WORDF scores are pessimistic, but still, could some combination of character and word n-grams improve the correlations of CHRF?

Pearson correlations with direct assessments

In order to explore combining CHRF with word n-grams, the following experiments are carried out in terms of calculating Pearson's correlation coefficient between DA and different n-gram F-scores:
1. As a first step, the β parameter is re-investigated for DA, both for CHRF and WORDF, in order to check if β = 2 is a good option for DA, too;
2. Individual character and word n-grams are investigated in order to see if some are better than others and to what extent;
3. Finally, various combinations of character and word n-grams are explored, and the results are reported for the most promising ones.

β parameter revisited

Previous work (Popović, 2016) reported that the best β parameter both for CHRF and for WORDF is 2 in terms of Kendall's τ segment-level correlation with human relative rankings (RR). However, this parameter has not been tested for direct human assessments (DA) - therefore we tested several β values in terms of Pearson correlations with DA. It is confirmed that putting more weight on precision is not good, and the results for β = 1, 2, 3 are reported in Table 1. Both for CHRF and WORDF, the correlations for β = 2, 3 are comparable, and better than for β = 1. Since there is almost no difference between 2 and 3, and putting too much weight on recall could jeopardise some other applications such as system tuning or system combination (for example, Sánchez-Cartagena and Toral (2016) decided to use CHRF1 because CHRF3 led to the generation of overly long sentences), we decided to choose β = 2, which is used for all further experiments.

Individual character and word n-grams

Individual n-grams were also investigated in previous work, however (i) only character n-grams and (ii) only compared with RR, not with DA. In this work, we carried out a systematic investigation of both character and word n-grams' correlations with DA, and the results are reported in Table 2. It should be noted that, to the best of our knowledge, word n-grams with order less than 4 have not yet been investigated in the given context of correlations with RR or DA. Implicitly, the METEOR metric (Banerjee and Lavie, 2005) is based on word unigrams with additional information and generally correlates better with human rankings than the BLEU metric (Papineni et al., 2002), which is based on uni-, bi-, 3- and 4-gram precision. The results show that, similarly to the correlations with RR, the best character n-grams are of the middle lengths, i.e. 3 and 4. The main finding is, though, that the best word n-grams are the short ones, namely unigrams and bigrams.
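The segment-level correlations reported throughout this section can be computed in a few lines; the score arrays below are invented placeholders, not WMT data.

```python
# Pearson's correlation between segment-level metric scores and DA
# (illustrative numbers only).
from scipy.stats import pearsonr

da     = [0.81, 0.42, 0.65, 0.93, 0.30]   # direct assessments per segment
metric = [0.74, 0.51, 0.60, 0.88, 0.35]   # e.g., CHRF per segment

r, p = pearsonr(metric, da)
print(f"Pearson r = {r:.3f}")
```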
Following these results for individual n-grams, several different experiments were carried out, involving different character n-gram weights, combining character and word n-grams with different weights, etc.; however, no consistent improvements were noticed in comparison to the standard uniform n-gram weights, not even by removing or setting a low weight for character unigrams. The only noticeable improvement was observed when word 4-grams and 3-grams were removed.

The emergence of CHRF++

The findings reported in the previous section raised the following questions: (i) are word 3-grams and 4-grams the "culprits" for the overly pessimistic behaviour of WORDF? (ii) can word unigrams and bigrams diminish the potentially too optimistic behaviour of CHRF? In order to get the answers, the Pearson correlations are calculated for CHRF combined with four WORDFs with different maximum n-gram lengths, i.e. N = 1, 2, 3, 4, and the results are presented in Table 3. In addition, correlations are also presented for CHRF and two variants of WORDF (the usual N=4 and the best N=2).

Table 2: Pearson's correlation coefficients of CHRF and WORDF with direct human assessments (DA) for individual character and word n-grams. Bold represents the best character-level value and underline represents the best word-level value.

First, it can be seen that removing word 3-grams and 4-grams improves the correlation for WORDF, which becomes closer to CHRF (and even better for one of the two German→English texts). Furthermore, it can be seen that adding word unigrams and bigrams to CHRF improves the correlations of CHRF in the best way. Therefore this is the variant which was chosen to be CHRF++. The next best option (CHRF+) is to add only word unigrams, i.e. words, and this one is the best for translation into Russian. Possible reasons are the morphological richness of Russian as well as its rather free word order; however, the test set in this experiment is too small to draw any conclusions. Both CHRF++ and CHRF+ should be further tested on more texts and on more morphologically rich languages. The scatter plots presented in Figure 2 visualise the improvement of correlations by CHRF++: WORDF with N=4 (a) is, as already shown, too pessimistic. Lowering the maximum n-gram length to 2 (b) moves a number of pessimistic points upwards, thus improving the correlation. When added to the slightly overly optimistic CHRF (c), the points for both metrics are moved more towards the middle (d).

Conclusions

The results presented in this work show that adding short word n-grams, i.e. unigrams and bigrams, to the character n-gram F-score CHRF improves the correlation with direct human assessments (DA). Since the amount of available texts with DA is still small, it is not yet possible to conclude which variant is better: adding only unigrams (CHRF+) or unigrams and bigrams (CHRF++). This is especially hard to conclude for translation into morphologically rich languages, since only Russian was available until now. In order to explore both CHRF+ and CHRF++ more systematically, both are submitted to the WMT-17 metrics task for translations from English. For translation into English, only CHRF++ is submitted since it outperformed the other variant for English. For Chinese, only the raw CHRF has been submitted since the concept of "Chinese words" is generally not clear. Further work should include more data and more distinct target languages.
The tool for calculating CHRF++ (as well as CHRF+ and CHRF, since it is possible to change the maximum n-gram lengths) is publicly available at https://github.com/m-popovic/chrF. It is a Python script which requires (multiple) reference translation(s) and a translation hypothesis (output) in raw text format. It is language independent and does not need tokenisation or any similar preprocessing of the text. The default β is set to 2, but it is possible to change it. The tool provides both segment-level scores and document-level scores, the latter in two variants: micro- and macro-averaged.

Table 3: Pearson's correlation coefficients with direct human assessments (DA) of CHRF enhanced with word n-grams, together with CHRF and two variants of WORDF: N=4 and N=2. Bold represents the best overall value.

Figure 2 (caption fragment): Combining CHRF with word unigrams and bigrams further decreases the frequency of such points and also lowers the overall CHRF scores, pushing the points more towards the middle.
Adiabatic pumping driven by a moving kink in a buckled graphene nanoribbon with implications for a quantum standard for the ampere

A quantum pump in a buckled graphene ribbon with armchair edges is discussed numerically. By solving the Su-Schrieffer-Heeger model and performing a computer simulation of quantum transport, we find that a kink adiabatically moving along the metallic ribbon results in highly efficient pumping, with a charge per kink transition close to the maximal value determined by the Fermi velocity in graphene. Remarkably, insulating nanoribbons show a quantized value of the charge per kink (2e) in a relatively wide range of system parameters, providing a candidate for a quantum standard for the ampere. We attribute it to the presence of a localized electronic state, moving together with the kink, whose energy lies within the ribbon energy gap.

As a generic quantum pump transfers electric charge between two reservoirs at zero external bias, solely due to periodic modulation of the device connecting the reservoirs [16], new fundamental and practical aspects of any particular pumping mechanism may be unveiled through charge quantization at the nanoscale. Various single-electron pumps have been considered as candidates for a quantum standard for the ampere [17][18][19]: in case the charge pumped per cycle is perfectly quantized (i.e., equal to Q = ne, with n integer) in a considerably wide range of driving parameters, the output current delivered by the device is I_P = ne f_P, with f_P being the external frequency, and the SI unit of current can be redefined by fixing the value of the elementary charge, e = 1.602 176 634 × 10⁻¹⁹ A s, with the second defined via the ground-state hyperfine transition frequency of the cesium-133 atom, ν_Cs = 9 192 631 770 Hz [20,21]. So far, single-electron pumps with the potential to operate as a standard ampere are predominantly based on gate-driven quantum dot systems [19]. We argue here, presenting the results of a computer simulation of quantum transport, that the electromechanical pump based on a buckled graphene ribbon, which has recently attracted some attention as a physical realization of the classical φ⁴ model and its topological solutions (kinks) connecting two distinct ground states [22,23], may also be considered as a counterpart to the above-mentioned single-electron pumps. Earlier [15] we showed that a system similar to the one presented in Fig. 1, consisting of a metallic graphene nanoribbon with armchair edges coupled to heavily doped graphene leads, may operate as an efficient quantum pump, but the charge per cycle is not quantized. Here the discussion is supplemented by (i) taking the case of an insulating nanoribbon into account, and (ii) optimizing the atomic bond lengths in the framework of the Su-Schrieffer-Heeger (SSH) model [24] including electron-phonon coupling of the Peierls type [25]. As a result, we find that for an insulating ribbon a single electronic state localized at the kink is well separated from the extended states, and the charge pumped is quantized. Topological aspects of the system are crucial to understanding the charge-pump operation, since the electron-phonon coupling leads to a peculiar arrangement of shortened (lengthened) bonds being perpendicular (parallel) to the main ribbon axis in the kink area, resulting in the electron localization.
We also show in this paper that, although the kink shape is well described within the standard molecular-dynamics potentials for graphite-based systems (as implemented in the LAMMPS package [26]), for accurate modeling of quantum transport phenomena one needs to include small corrections to the bond lengths (up to a few percent in the kink area), following from electron-phonon coupling. A minimal quantum-mechanical Hamiltonian, of the SSH type, allowing one to model both the kink shape and the transport, is proposed.

FIG. 1. Buckled graphene ribbon as a quantum pump. Top: Nanoribbon buckled by changing the distance between fixed armchair edges from the equilibrium width of W = 11 a (with a = 0.246 nm the lattice spacing) to W′ = 0.9 W, attached to heavily doped graphene leads (red), each of width W_∞, separated by distance L_1. The total ribbon length is L = L_1 + 2W_∞ + 2L_s, where L_s denotes the distance between the free ribbon edge and the lead edge. The kink is formed near the ribbon center. The ammeter detects the current driven by a moving kink. The gate electrode (not shown) is placed underneath to tune the chemical potential μ_0 in the ribbon area. The schematic potential profile U(x) (bottom left) and the coordinate system (top left) are also shown. Inset: Band structure of the infinite flat ribbon with armchair edges for W = 11 a (solid blue lines) and W = 10 a (dashed red lines).

The remaining part of the paper is organized as follows. In Sec. II we present the model Hamiltonian and our method of approach. In Sec. III we discuss quantum states of a finite section of a buckled ribbon (i.e., a closed system) with a kink. The conductance and the adiabatic pumping in the ribbon coupled to the leads (an open system) are analyzed numerically in Sec. IV. The conclusions are given in Sec. V.

A. The Hamiltonian

Our analysis starts from the Su-Schrieffer-Heeger (SSH) Hamiltonian for graphene nanostructures [24,27,28],

H_SSH = T + V_bond + V_angles,   (1)

with the potential-energy terms describing the covalent bonds [4]. The kinetic-energy operator for π electrons,

T = Σ_{i,j,s} t_ij c†_{i,s} c_{j,s},   (2)

includes the hopping-matrix elements t_ij corresponding to the nearest neighbors on a honeycomb lattice (denoted by brackets ⟨ij⟩), with the equilibrium hopping integral t_0 = 2.7 eV. The change in bond length, δd_ij = d_ij − d_0, is calculated with respect to the equilibrium bond length, d_0 = a/√3, with a = 2.46 Å being the lattice spacing. The operator c†_{i,s} (or c_{i,s}) creates (or annihilates) a π electron at the ith lattice site with spin s. The electron-phonon coupling, quantified by the dimensionless parameter β = −∂ ln t_ij/∂ ln d_ij |_{d_ij = d_0} (to be specified later), is represented by the exponential factor in Eq. (2), replacing the standard Peierls form (1 − β δd_ij/d_0) in order to prevent t_ij from changing sign upon strong lattice deformation. The next two terms in Eq. (1), V_bond [Eq. (3)] and V_angles [Eq. (4)], approximate the potential energies for bond stretching and bond-angle bending (respectively); see Fig. 2. The parameters K_d = 40.67 eV/Å², K_θ = 5.46 eV/rad², and θ_0 = π/3 are taken from Ref. [4] and restore the actual in-plane elastic coefficients of bulk graphene in the case of β = 0. Otherwise (for β ≠ 0), a correction to the potential energy per bond can be estimated via Eq. (5), with N_b the number of C-C bonds.
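To illustrate how the distance-dependent hoppings of Eq. (2) enter the kinetic-energy matrix, here is a minimal numpy sketch; the toy geometry and the neighbor cutoff are our own illustrative choices, not the ribbon geometry used in the paper.

```python
# Minimal sketch of distance-dependent hoppings, t_ij = -t0 * exp(-beta *
# (d_ij - d0) / d0) for nearest neighbours, as in Eq. (2). Positions are
# illustrative placeholders, not the optimized ribbon.
import numpy as np

t0, a = 2.7, 2.46          # eV, Angstrom (lattice spacing)
d0 = a / np.sqrt(3)        # equilibrium C-C bond length
beta = 3.0                 # electron-phonon coupling strength

def kinetic_matrix(positions, cutoff=1.2):
    """Tight-binding matrix with hoppings modulated by bond stretching;
    pairs closer than cutoff * d0 are treated as nearest neighbours."""
    n = len(positions)
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            if d < cutoff * d0:
                t = t0 * np.exp(-beta * (d - d0) / d0)
                T[i, j] = T[j, i] = -t
    return T

# Toy four-atom chain with slightly stretched bonds:
pos = np.array([[0.0, 0, 0], [1.02 * d0, 0, 0],
                [2.04 * d0, 0, 0], [3.06 * d0, 0, 0]])
energies = np.linalg.eigvalsh(kinetic_matrix(pos))
print(energies)
```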
The second approximate equality in Eq. (5) is obtained by substituting ⟨Σ_s (c†_{i,s} c_{j,s} + c†_{j,s} c_{i,s})⟩ ≈ 1.050 (with i and j nearest neighbors), this being the value for a perfect, bulk graphene sheet at half electronic filling [29]. The expression for V_angles [Eq. (4)] consists of two terms, each involving a summation over the three angles θ^(j) having a common vertex at a given lattice site j (see Fig. 2). The first term, ∝ K_θ, represents the harmonic approximation for in-plane bond-angle bending. For out-of-plane deformations, quantified by the height h_j of a tetrahedron formed by the jth site and its three nearest neighbors, this term represents a fourth-order correction to the potential energy. A realistic description of out-of-plane deformations requires a correction of the order of h_j² (for h_j ≪ d_0). Here we propose a term proportional to the excess angle, with the coefficient V_δ ≈ t_0 = 2.7 eV roughly approximating the bending rigidity of graphene [5,6]. The main advantage of such an approach is that it requires no computationally expensive operations, since the four-body term (∝ V_δ) depends only on the angles θ^(j) already determined for the three-body term (∝ K_θ). The validity of our approach, in comparison with standard molecular-dynamics treatments [22,23], is discussed later in this paper.

B. The optimization procedure

Throughout the paper we compare the results obtained in the absence of electron-phonon coupling, β = 0, and for the dimensionless parameter β = 3; the two values are chosen to bound the possible range of β (see Ref. [6]). It is worth stressing here that in the forthcoming analysis physical properties are discussed as functions of the applied deformation, and thus another choice of β ≠ 0 (β being the proportionality coefficient between the local deformation and the corresponding correction to the Hamiltonian) may rather shift the characteristic features observed than change the picture in a qualitative manner. In the β = 0 case, electronic and lattice degrees of freedom are decoupled, and one simply needs to solve a purely classical minimization problem for the potential-energy part of the Hamiltonian H_SSH [Eq. (1)], given by V_bonds + V_angles [see Eqs. (3) and (4)]. For the β ≠ 0 case, the average kinetic energy can be calculated as

⟨T⟩ = 2 Σ_{k=1}^{N_el/2} E_k = 2 Σ_{k=1}^{N_el/2} Σ_{⟨ij⟩} t_ij ( ψ_k^(i)* ψ_k^(j) + c.c. ),   (8)

where the factor 2 in the last two expressions follows from spin degeneracy, t_ij = −t_0 exp(−β δd_ij/d_0) if i and j are nearest neighbors (otherwise, t_ij = 0), and ψ_k^(j) denotes the probability amplitude for the kth eigenstate of the kinetic-energy operator T [Eq. (2)] at the jth lattice site. We further suppose that the eigenstates are ordered such that the energies E_1 ⩽ E_2 ⩽ ··· ⩽ E_{N_at}, with the number of atoms N_at, and that the number of electrons, N_el, is even for simplicity. In both cases (β = 0 and β = 3), the numerical minimization of the ground-state energy with respect to the atomic positions {R_j} is performed employing modified periodic boundary conditions in the y direction (see Fig. 1). Namely, the system is invariant upon y → y + L and z → −z, forcing the kink formation in a buckled ribbon. The outermost two rows of atoms near each armchair edge are fixed during the minimization (see Fig. 2), and buckling of the ribbon is realized by changing the distance between the fixed edges [see Fig. 2(a)] from W to W′ < W. Furthermore, the number of electrons is fixed at N_el = N_at, with N_at = 3600 for the metallic armchair ribbon (W = 10 a, L = 90√3 a) or N_at = 3960 for the insulating armchair ribbon (W = 11 a, L = 90√3 a).
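The alternating minimization just described can be sketched on a toy one-dimensional chain instead of the full ribbon: diagonalize the kinetic-energy matrix for fixed positions, evaluate E_G = ⟨T⟩ + V_bonds, and take numerical gradient-descent steps. All numbers below are illustrative, and the angle terms and constraint are omitted for brevity.

```python
# Schematic gradient-descent optimization of the ground-state energy
# E_G = <T> + V_bonds on a toy SSH-like chain (illustrative parameters).
import numpy as np

t0, d0, beta, Kd, n_at = 2.7, 1.42, 3.0, 40.67, 8   # eV, Angstrom
n_el = n_at                                          # half filling

def total_energy(x):
    d = np.diff(x)                                   # bond lengths
    t = t0 * np.exp(-beta * (d - d0) / d0)           # Eq. (2)-style hoppings
    T = np.diag(-t, 1) + np.diag(-t, -1)
    E = np.linalg.eigvalsh(T)
    kinetic = 2.0 * E[: n_el // 2].sum()             # factor 2: spin degeneracy
    return kinetic + 0.5 * Kd * ((d - d0) ** 2).sum()

x = d0 * np.arange(n_at, dtype=float)                # equally spaced start
h, lr = 1e-5, 1e-3
for _ in range(300):                                 # iterate to convergence
    grad = np.zeros_like(x)
    for k in range(1, n_at - 1):                     # outermost atoms fixed
        xp, xm = x.copy(), x.copy()
        xp[k] += h; xm[k] -= h
        grad[k] = (total_energy(xp) - total_energy(xm)) / (2 * h)
    x -= lr * grad                                   # gradient-descent step
print(np.diff(x))                                    # optimized bond lengths
```

On such a half-filled chain the optimization develops the familiar Peierls-type bond-length alternation, the one-dimensional analogue of the bond modulation discussed below.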
Although in the open system, coupled to the leads, the average N_el varies with the chemical potential, such fluctuations [typically limited to |μ| < 0.1 eV or, equivalently, ΔN_el/N_at ≈ 0.18 (μ/t_0)² < 2 × 10⁻⁴; see Ref. [30]] are insignificant when determining the optimal bond lengths. Alternatively, one can interpret the β = 0 case as a hypothetical N_el = 0 situation, leading to bond-length modifications not exceeding a few percent (see below). For a fixed W′/W ratio and the kink position y = y_0, the computations proceed as follows. The initial arrangement of carbon atoms is given by Eqs. (10)-(12): the coordinates (x̃_j, ỹ_j) of the jth atom on a flat honeycomb lattice are mapped onto the buckled ribbon using a smooth scaling function, and the buckle height H in Eq. (11) is adjusted such that C = N_b d_0 in Eq. (5). The kink size is fixed at 5 a, roughly approximating the kink profiles reported in Refs. [22,23]. In the first step, we minimize the potential-energy term V_bonds + V_angles, ignoring the constraint given by Eq. (5). This gives us the solution for β = 0 in the Hamiltonian H_SSH (1). The next step, performed only if β > 0, involves a further adjustment of the atomic positions {R_j} such that the full ground-state energy E_G [Eq. (9)] reaches a minimum. In practice, we determine the hopping parameters for given {R_j}, and then find (within the gradient-descent method) a conditional minimum of E_G with respect to {R_j} at fixed values of ⟨c†_{i,s} c_{j,s}⟩, satisfying the constraint given by Eq. (5). The procedure is iterated until numerical convergence is reached. Typically, after 3-4 iterations the atomic positions {R_j} are determined with an accuracy better than 10⁻⁵ a.

C. Comparison with LAMMPS results

A brief comparison of the kink shape following from the numerical procedure described above with the corresponding output produced by the LAMMPS Molecular Dynamics Simulator [26,31] is presented in Fig. 3. In order to quantify the difference in atomic arrangements obtained within the different approaches, we choose the maximal absolute displacement of an atom along the x (and y) axis, max|x − x^(0)| (and max|y − y^(0)|), where the maximum is taken over a subset of atoms with equal initial y^(0) coordinates; see Figs. 3(a) and 3(b). It is sufficient to display the data corresponding to a vicinity of the kink, 30 ⩽ y^(0)/a ⩽ 90, since far away from the kink position (fixed at y_0 ≈ 58.5 a) both quantities considered become y^(0) independent. (As free boundary conditions are applied when the LAMMPS package is utilized, some y^(0) dependencies reappear near the free zigzag edges, but they are much smaller in magnitude than the dependencies in the kink area.) It is clear from Figs. 3(a) and 3(b) that the LAMMPS results (see red dashed lines) are closer to those obtained with our optimization procedure in the presence of electron-phonon coupling, β = 3 (solid symbols), than for β = 0 (open symbols). Also, the x-z views of the system, presented in Figs. 3(c), 3(d), and 3(e), show that an approximate mirror symmetry of the kink appears for β = 3 [see Fig. 3(d)] and for the LAMMPS results [Fig. 3(e)], but is absent for β = 0 [Fig. 3(c)]. The above observations can be rationalized by taking into account that four-body (dihedral) and long-range Lennard-Jones potential-energy terms are included in the LAMMPS package but absent in our model Hamiltonian H_SSH (1). In the presence of electron-phonon coupling (β > 0), however, the average kinetic energy ⟨T⟩ [Eq.
(8)] can be interpreted as an effective long-range (and "infinite-body") attractive interaction between atoms, restoring some features related to the Lennard-Jones forces in molecular dynamics (including the approximate mirror symmetry of the kink).

A. Bond-length modulation

Before discussing the electronic structure of the system, we briefly describe the small corrections to the bond lengths appearing in the kink area due to electron-phonon coupling (see Fig. 4), which are essential for understanding the results presented in the remaining parts of the paper. In Figs. 4(a) and 4(b) we visualize the spatial arrangements of shortened and lengthened bonds; namely, d_ij < ⟨d_ij⟩ (thick black lines) and d_ij > ⟨d_ij⟩ (thin red lines), where the average bond length ⟨d_ij⟩_{i,j=1,...,N_at} = 0.998 d_0 for β = 0 and W′/W = 0.9, or ⟨d_ij⟩ ≡ d_0 for β = 3 due to the constraint imposed [see Eq. (5)]. Apparently, in the presence of electron-phonon coupling (β = 3), a large rectangular block is formed in the kink area [i.e., for |y − y_0| ≲ 7 a; see Fig. 4(b)], in which almost all bonds oriented in the zigzag direction are shortened (resulting in hopping elements |t_ij| > t_0) and almost all remaining bonds are lengthened (|t_ij| < t_0). In the absence of electron-phonon coupling (β = 0) the situation is less clear [Fig. 4(a)], with a few smaller blocks of shortened or lengthened bonds forming more complex patterns, some of which are isotropic, and some of which show various crystallographic orientations. This qualitative finding is further supported by the statistical distributions of the relative bond length (d_ij/d_0), determined using all N_b = 5749 bonds in the system (for L = 90√3 a and W = 11 a) and displayed in Figs. 4(c) and 4(d). In particular, the distribution for β = 3 [Fig. 4(d)] is significantly wider than for β = 0 [Fig. 4(c)]. Also, a bimodal structure of the distribution is visible in the presence of electron-phonon coupling.

B. The current blocking

In order to understand how the bond-length modulation may affect the transport properties, we focus now on the Dirac points (K and K′) and the changes in their positions in the first Brillouin zone due to strain-induced fields (see Fig. 5). Revisiting the derivation of an effective Dirac equation for graphene, one finds that weak deformations introduce peculiar gauge fields, with the vector potential for the K valley [6]

δA = (c β/d_0) (u_xx − u_yy, −2 u_xy),

where c is a dimensionless coefficient of the order of unity, u_ij = ½(∂_i u_j + ∂_j u_i) (with i, j = x, y) is the symmetrized strain tensor for in-plane deformations [32], and the coordinate system is chosen as in Fig. 1 (i.e., such that the x axis corresponds to a zigzag direction of the honeycomb lattice). For the K′ valley, the strain-induced field has the opposite sign (namely, δA → −δA). For an approximately uniform compression along the x direction occurring in the kink area, we have u_x ≈ (W′/W) x, u_y = y, and the K point is shifted by δA ∝ −(1 − W′/W) k̂_x, with k̂_x being a unit vector in the k_x direction, while the K′ point is shifted by −δA, as visualized in the top panels of Fig. 5. Away from the kink area, buckling without changing bond lengths does not create strain-induced fields (δA ≈ 0) [33]. Additionally, the finite size along the x direction introduces the well-known geometric quantization, with discrete values of the quasimomentum k_x separated by Δk_x ∼ π/W (see bottom panels in Fig. 5). In principle, for a particular combination of δA and Δk_x, a nanoribbon may locally change its character from metallic to insulating (or vice versa).
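The valley shift discussed above can be evaluated directly from the strain tensor; the sketch below uses the reconstructed standard form δA = (cβ/d_0)(u_xx − u_yy, −2u_xy), whose prefactor should be treated as an assumption (c is only known to be of order unity).

```python
# Sketch of the strain-induced valley shift; the prefactor of the
# reconstructed gauge-field formula is an assumption.
import numpy as np

def gauge_field(u_xx, u_yy, u_xy, valley=+1, c=1.0, beta=3.0, d0=1.42):
    """Valley-dependent vector potential from the strain tensor;
    valley = +1 for K, -1 for K' (opposite sign)."""
    return valley * (c * beta / d0) * np.array([u_xx - u_yy, -2.0 * u_xy])

# Uniform compression along x in the kink area: u_x = (W'/W) x, u_y = y,
# so u_xx = W'/W, u_yy = 1, u_xy = 0 for W'/W = 0.9.
dA_K = gauge_field(u_xx=0.9, u_yy=1.0, u_xy=0.0, valley=+1)
print(dA_K)   # K point shifts along -k_x, proportional to -(1 - W'/W)
```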
In a more general situation, if δA and Δk_x are not precisely adjusted to alter the system properties at E = 0, one can find some finite energies (E > 0 for electrons or E < 0 for holes) for which quantum states are available only away from the kink area (or only in the kink area). A direct illustration is provided by the density of states discussed next.

C. Density of states

We consider here two nanoribbons with armchair edges, one of width W = 10 a (the metallic case) and the other of W = 11 a (the insulating case). The system length is L = 90√3 a in both cases, with modified periodic boundary conditions (see Sec. II B) applied to both lattice and electronic degrees of freedom. The two values β = 0 and β = 3 in the Hamiltonian H_SSH [Eq. (1)] are considered; the buckling magnitude is fixed at W′/W = 0.9. The above parameters allow us to define two energy scales: the subband splitting and the longitudinal quantization. In Fig. 6 we display the electronic density of states,

ρ(E) = Σ_n δ(E − E_n),

with E_n denoting the nth eigenvalue of the kinetic-energy operator T given by Eq. (2), for all four combinations of β and W. For plotting purposes, the δ function is smeared to a finite width, as specified in Eq. (17). The spectra clearly distinguish the metallic from the insulating finite section: in the former case, ρ(E) is elevated for any E, whereas in the latter case we have ρ(E) ≈ 0 in a vicinity of E = 0. The effects of electron-phonon coupling can be summarized as follows. In the metallic case, bond-length modulation results in small splittings of the electronic levels [see Fig. 6(b) for β = 3], which originally show approximate degeneracy [see Fig. 6(a) for β = 0], due to amplified scattering between the k_y and −k_y states occurring in the kink area. In the insulating case, we have two energy levels, appearing for β = 3 [see Fig. 6(d)] but absent for β = 0 [see Fig. 6(c)], one for electrons (marked with a red arrow) and one for holes, which occur in the gap range and are well separated from the other levels, suggesting that they are associated with localized states. This expectation is further supported by the local density of states (presented in Fig. 7),

ρ_loc(R_j, E) = Σ_n |ψ_n^(j)|² δ(E − E_n),

where the δ function is represented via Eq. (17) and the remaining symbols are the same as in Eq. (8). Adjusting the energy to the isolated electronic level appearing in the insulating case (W = 11 a) at E = 0.04 t_0, we immediately find that the corresponding quantum state is strongly localized in the kink area (see right panel in Fig. 7). In the metallic case (W = 10 a), the value E = 0.04 t_0 belongs to a continuum of extended states in the lowest subband, but the corresponding ρ_loc(R_j, E) profile shows a clear suppression in the kink area (see left panel in Fig. 7), allowing one to expect that the current propagation in the y direction may be blocked, in the presence of a kink, for a whole energy window corresponding to the lowest (or highest) subband for electrons (or holes).

IV. CONDUCTANCE AND ADIABATIC PUMPING IN THE OPEN SYSTEM

In this section we present the central results of the paper, concerning transport properties of the open system (a finite section of a nanoribbon attached to the leads) presented in Fig. 1.

A. Simulation details

So far we have discussed several characteristics of the closed system with modified periodic boundary conditions in the y direction (see Sec. III), making the kink position (y_0) irrelevant for global characteristics such as the density of states. Now we use the atomic positions {R_j} = {(x_j, y_j, z_j)} obtained with the optimization procedure described in Sec. II B (again, we consider the cases without and with electron-phonon coupling, β = 0 and β = 3) for y_0 = (3/8) L.
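For reference, the (local) density of states used for Figs. 6 and 7 can be sketched in a few lines of numpy; here the δ function is smeared to a Gaussian of width σ (a stand-in for whatever broadening Eq. (17) specifies), and a random Hermitian matrix plays the role of the kinetic-energy operator.

```python
# Smeared (local) density of states: rho(E) = sum_n delta(E - E_n) and
# rho_loc(R_j, E) = sum_n |psi_n(j)|^2 delta(E - E_n); the Gaussian
# smearing is an illustrative stand-in for Eq. (17).
import numpy as np

def smeared_delta(E, E_n, sigma):
    return np.exp(-(E - E_n) ** 2 / (2 * sigma ** 2)) / (
        sigma * np.sqrt(2 * np.pi))

def dos(E, eigvals, sigma=0.01):
    return smeared_delta(E, eigvals, sigma).sum()

def local_dos(E, eigvals, eigvecs, sigma=0.01):
    w = smeared_delta(E, eigvals, sigma)                    # per eigenstate
    return (np.abs(eigvecs) ** 2 * w[None, :]).sum(axis=1)  # per site j

# Toy example with a random Hermitian "kinetic-energy operator":
rng = np.random.default_rng(0)
H = rng.standard_normal((50, 50)); H = (H + H.T) / 2
vals, vecs = np.linalg.eigh(H)
print(dos(0.0, vals), local_dos(0.0, vals, vecs).shape)
```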
II B (again, we consider the cases without and with the electron-phonon coupling, β = 0 and β = 3) for y_0 = 3L/8. Next, the kink is placed at the desired position (say, y_0 + Δy) by applying a shift to all y coordinates, y → y + Δy. A series of consecutive shifts, such as those visualized in Fig. 8, emulates the kink motion (including full kink and antikink transitions) in a real system. In case the shift is commensurate with the longitudinal ribbon periodicity, Δy = √3 na with n integer, we simply apply modified periodic boundary conditions for all atoms for which y_j + Δy < 0 or y_j + Δy ≥ L. Otherwise (i.e., if Δy ≠ √3 na), the atomic positions after a shift, {R_j}_Δy, are determined via third-order spline interpolation (with ⌊x⌋ denoting the floor function of x). The hopping-matrix elements (t_ij) in Eq. (8) are then determined using the atomic positions after a shift, {R_j}_Δy, but we set t_ij = 0 in case i and j are terminal atoms from the opposite zigzag edges (i.e., periodic boundary conditions are no longer applied for electronic degrees of freedom). The leads, positioned at the areas of x < 0 and x > W in Fig. 1, are modeled as perfectly flat (i.e., t_ij = −t_0 for the nearest neighbors i and j) and heavily doped graphene areas, with the electrostatic potential energy U_∞ = −0.5 t_0 (compared to U_0 = 0 in the ribbon area, 0 < x < W), each of the width W_∞ = 17.5√3 a (corresponding to 11 propagating modes for E = 0). What is more, both leads are offset from the free ribbon edges by a distance of L_s = 7.5√3 a, suppressing the boundary effects. The scattering problem is solved numerically, for each value of the chemical potential μ = E − U_0 and the kink position y_0, using the KWANT package [34] in order to determine the scattering matrix, which contains the transmission t (t') and reflection r (r') amplitudes for charge carriers incident from the left (right) lead. B. Landauer-Büttiker conductance The linear-response conductance is determined from the S matrix via the Landauer-Büttiker formula [35,36], namely

G = G_0 Σ_n T_n,

where G_0 = 2e²/h is the conductance quantum and T_n is the transmission probability for the nth normal mode. In Fig. 9 we compare the conductance spectra for the same four combinations of the parameters W and β as used earlier when discussing the density of states (see Fig. 6). This time, results for a buckled ribbon, with W'/W = 0.9 and a kink placed at y_0 = 3L/8, are compared with the corresponding results for a flat ribbon (solid blue and dashed red lines in Fig. 9, respectively). In the metallic case, electron-phonon coupling strongly suppresses the transport in the presence of a kink [Fig. 9(b)]; the effect of a kink is much weaker in the absence of electron-phonon coupling [Fig. 9(a)]. Similar effects can be noticed in the insulating case, provided that the chemical potential is adjusted to the first conductance step above (or below) the gap range [Figs. 9(c) and 9(d)]. C. The pumping spectra In the absence of a voltage bias between the leads, the charge transferred solely due to adiabatic kink motion (i.e., by varying the parameter y_0) can be written as [16]

Q(μ) = (e/π) ∫_0^L dy_0 Σ_m Im [ (∂S/∂y_0) S† ]_mm ,

where the summation runs over the modes in a selected (output) lead. We further notice that the molecular dynamics simulations of Refs. [22,23] allow us to estimate the typical kink velocity (up to the order of magnitude) as v_kink ∼ 1 km/s ≪ v_F, where v_F = √3 t_0 a/(2ħ) ≈ 10^6 m/s is the Fermi velocity in graphene, justifying the adiabatic approximation [37]. 
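Since the scattering problem is solved with the KWANT package [34], a minimal sketch of this type of calculation may be helpful. The clean honeycomb strip, its dimensions, and the uniform hopping below are our own simplifying assumptions; the paper's calculation modulates t_ij using the optimized atomic positions, which is only indicated by a placeholder here.

import kwant

a = 1.0                        # lattice constant (in units of a)
t0 = 1.0                       # bare hopping (in units of t0)
L, W = 30.0, 10.0              # strip dimensions (assumed values)
U_inf = -0.5 * t0              # heavy doping of the leads, as in the text

lat = kwant.lattice.honeycomb(a, norbs=1)

def strip(pos):
    x, y = pos
    return 0 <= x < L and 0 <= y < W

def hopping(site_i, site_j):
    # Placeholder: a full calculation would return a bond-length-dependent
    # t_ij computed from the optimized atomic positions {R_j}.
    return -t0

syst = kwant.Builder()
syst[lat.shape(strip, (0, 0))] = 0.0        # U_0 = 0 in the ribbon area
syst[lat.neighbors()] = hopping

# Heavily doped, perfectly flat leads attached on the left and right.
lead = kwant.Builder(kwant.TranslationalSymmetry(lat.vec((-1, 0))))
lead[lat.shape(lambda pos: 0 <= pos[1] < W, (0, 0))] = U_inf
lead[lat.neighbors()] = -t0
syst.attach_lead(lead)
syst.attach_lead(lead.reversed())

fsyst = syst.finalized()
smat = kwant.smatrix(fsyst, energy=0.04 * t0)   # E = 0.04 t0, cf. Fig. 7
print("G/G0 =", smat.transmission(1, 0))        # Landauer-Buettiker sum of T_n
# Sweeping a kink position y0 and integrating Im[(dS/dy0) S^dagger] over a
# full shift then yields the pumped charge Q(mu) discussed in Sec. IV C.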
Numerical results for Q(μ), obtained by shifting the kink from y_0 = 0 to y_0 = L, are presented in Fig. 10. Although the current blocking in the metallic case is far from perfect [see Fig. 9(b)], the related pumping mechanism for W = 10a appears to be rather effective (see the top panel in Fig. 10), with Q(μ) approaching the total charge available for transfer in a section of length L_eff, the value of which can be approximated following Ref. [38], where we put L_1 ≲ L_eff ≲ L_1 + W_∞, estimating the effective length of the ribbon section between the leads (see the shaded area in Fig. 10). Significant changes to the Q(μ) spectra are observed in the insulating case of W = 11a (see the bottom panel in Fig. 10). Namely, there is an abrupt switching between Q ≈ 0 near the center of the gap (at μ = 0) and Q ≈ 2e appearing for μ exceeding the energy level localized in the kink area [see Fig. 6(d)]. The value of Q ≈ 2e remains unaffected until μ approaches the bottom of the lowest electronic subband [corresponding to the first conductance step in Fig. 9(d)]. For higher μ, the picture becomes qualitatively similar to that for the metallic case, with Q(μ) systematically growing with μ and decreasing with W'/W. Noticeably, the plateau with Q ≈ 2e is well developed starting from moderate bucklings, W'/W ≈ 0.95. For W'/W ≈ 0.9, the deviation from the quantum value in the plateau range is of the order of |Q − 2e| ∼ 10⁻⁴ e and can be attributed to finite-size effects. Some stronger deviations may appear in a more realistic situation due to finite-temperature and nonadiabatic effects, which are beyond the scope of this work. In both (metallic and insulating) cases, the stability of the numerical integration in Eq. (21) substantially improves for lead offsets L_s ≳ 5a (comparable with the kink size), for which the parts of the ribbon attached to the leads, together with the section between the leads, are (almost) uniformly buckled for either y_0 ≈ 0 or y_0 ≈ L. In Fig. 10 we also display the maximal bond distortions for different bucklings (see the inset), showing that the local deformations satisfy |δd_ij| < 0.1 d_0 for all 0.9 ≤ W'/W < 1. V. CONCLUSIONS We have demonstrated, by means of computer simulations of electron transport, that a buckled graphene nanoribbon with a topological defect (the kink) moving along the system may operate as an adiabatic quantum pump. The pump characteristics depend on whether the ribbon is metallic or insulating. In the former case, even for moderate bucklings (with relative bond distortions below 10%) the kink strongly suppresses the current flow and shifts the electric charge when moving between the leads attached to the system sides; the charge pumped per cycle, however, is not quantized. For an insulating ribbon, there are electronic states localized near the kink (with energies lying within the energy gap) which can be utilized to transport a quantized charge of 2e per kink transition (with the factor 2 following from spin degeneracy), providing a candidate for a quantum standard of the ampere. Remarkably, the current suppression, and the subsequent effects we have described, become visible only after the bond-length optimization for the Su-Schrieffer-Heeger model is performed, which introduces significantly stronger bond distortions than the optimization of the classical (molecular-dynamics-like) model. Therefore, electron-phonon coupling appears to be a crucial factor for utilizing the moving kink for adiabatic quantum pumping in buckled graphene ribbons.
7,050.4
2020-10-16T00:00:00.000
[ "Physics" ]
Analyzing Linguistic Differences between Owner and Staff Attributed Tweets Research on social media has to date assumed that all posts from an account are authored by the same person. In this study, we challenge this assumption and study the linguistic differences between posts signed by the account owner or attributed to their staff. We introduce a novel data set of tweets posted by U.S. politicians who self-reported their tweets using a signature. We analyze the linguistic topics and style features that distinguish the two types of tweets. Predictive results show that we are able to predict owner and staff attributed tweets with good accuracy, even when not using any training data from that account. Introduction Social media has become one of the main venues for breaking news that comes directly from primary sources. Platforms such as Twitter have started to play a key role in elections (Politico, 2017) and have become widely used by public figures to disseminate their activities and opinions. However, posts are rarely authored by the public figure who owns the account; rather, they are posted by staff who update followers on the thoughts, stances and activities of the owner. This study introduces a new application of Natural Language Processing: predicting which posts from a Twitter account are authored by the owner of the account. Direct applications include predicting owner authored tweets for unseen users, which can be useful to political or PR advisers seeking a better understanding of how to craft more personal or engaging messages. Past research has experimented with predicting user types or traits from tweets (Pennacchiotti and Popescu, 2011; McCorriston et al., 2015). However, all these studies have relied on the assumption that tweets posted from an account were all written by the same person. No previous study has looked at predicting which tweets from the same Twitter account were authored by different persons, here staffers or the owner of the Twitter account. Figure 1 shows an example of a U.S. politician who signs their tweets by adding '-PM' at the end of the tweet. Staff posts are likely to differ in terms of topics, style, timing or impact from posts attributed to the owner of the account. The goals of the present study are thus to: • analyze linguistic differences between the two types of tweets in terms of words, topics, style, type and impact; • build a model that predicts if a tweet is attributed to the account owner or their staff. To this end, we introduce a novel data set consisting of over 200,000 tweets from accounts of 147 U.S. politicians that are attributed to the owner or their staff. 1 Evaluation on unseen accounts leads to a performance of up to .741 ROC AUC. Similar account sharing behaviors exist in several other domains, such as Twitter accounts of entertainers (artists, TV hosts), public figures or CEOs who employ staff to author their tweets, or organizational accounts, which alternate between posting messages about important company updates and tweets about promotions, PR activity or customer service. Direct applications of our analysis include automatically predicting staff tweets for unseen users and gaining a better understanding of how to craft more personal messages, which can be useful to political or PR advisers. Related to our task is authorship attribution, where the goal is to predict the author of a given text. 
With few exceptions (Schwartz et al., 2013b), this was attempted on larger documents or books (Popescu and Dinu, 2007; Stamatatos, 2009; Juola et al., 2008; Koppel et al., 2009). In our case, the experiments are set up as the same binary classification task regardless of the account (owner vs. staffer), which, unlike authorship attribution, allows for experiments across multiple user accounts. Additionally, in most authorship attribution studies, differences between authors consist mainly of the topics they write about. Our experimental setup limits the extent to which topic presence impacts the prediction, as all tweets are posted by US politicians and, within an account, the topics of the tweets should be similar to each other. Pastiche detection is another related area of research (Dinu et al., 2012), where models are trained to distinguish between an original text and a text written by someone who aims to imitate the style of the original author, resulting in the documents having similar topics. Data We build a data set of Twitter accounts used by both the owner (the person who the account represents) and their staff. Several Twitter users attribute the authorship of a subset of their tweets to themselves by signing these with their initials or a hashtag, following the example of Barack Obama (Time, 2011). The rest of the tweets are implicitly attributed to their staff. Thus, we use the Twitter user description to identify potential accounts where owners sign their tweets. We collect in total 1,365 potential user descriptions from Twitter that match a set of keyphrases indicative of personal tweet signatures (i.e., tweets by me signed, tweets signed, tweets are signed, staff unless noted, tweets from staff unless signed, tweets signed by, my tweets are signed). We then manually check all descriptions and filter out those not mentioning a signature, leaving us with 628 accounts. We aim to perform our analysis on a set of users from the same domain to limit variations caused by topic, and we observe that the most numerous category of users who sign their messages is U.S. politicians, which leaves us with 147 accounts. We download all the tweets posted by these accounts that are accessible through the Twitter API (a maximum of 3,200). We remove the retweets made by an account, as these are not attributed to either the account owner or their staff. This results in a data set with a total of 202,024 tweets. We manually identified each user's signature from their profile description. To assign labels to tweets, we automatically matched the signature to each tweet using a regular expression. We remove the signature from all predictive experiments and feature analyses, as this would make the classification task trivial. In total, 9,715 tweets (4.8% of the total) are signed by the account owners. While our task is to predict if a tweet is attributed to the owner or their staff, we treat this as a proxy for authorship, assuming account owners are truthful when using the signature in their tweets. There is little incentive for owners to be untruthful, with potentially serious negative ramifications associated with public deception. We use DLATK, which handles social media content and markup such as emoticons or hashtags (Schwartz et al., 2017). Further, we anonymize all usernames and URLs present in the tweets and replace them with placeholder tokens. 
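To make the labeling step concrete, here is a minimal sketch of signature matching; the signature '-PM' and the example tweets are hypothetical, and the authors' exact regular expression is not given in the text.

import re

def label_and_strip(tweet, signature):
    """Return (is_owner, tweet_without_signature)."""
    # The signature (e.g., '-PM') must close the tweet, possibly followed
    # by whitespace; it is stripped so that features never see it.
    pattern = re.compile(re.escape(signature) + r"\s*$")
    if pattern.search(tweet):
        return True, pattern.sub("", tweet).rstrip()
    return False, tweet

print(label_and_strip("Proud to support our veterans today. -PM", "-PM"))
# (True, 'Proud to support our veterans today.')
print(label_and_strip("Office hours resume Monday at 9am.", "-PM"))
# (False, 'Office hours resume Monday at 9am.')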
Features We use a broad set of linguistic features motivated by past research on user trait prediction in our attempt to predict and interpret the difference between owner and staff attributed tweets. These include: LIWC. Traditional psychology studies use a dictionary-based approach to representing text. The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001), consisting of 73 manually constructed lists of words (Pennebaker et al., 2015) including some specific parts-of-speech, topical or stylistic categories. Each message is thereby represented as a frequency distribution over these categories. Word2Vec Clusters. An alternative to LIWC is to use automatically generated word clusters. These clusters of words can be thought of as topics, i.e., groups of words that are semantically and/or syntactically similar. The clusters help reduce the feature space and provide good interpretability. We compute topics using Word2Vec similarity (Mikolov et al., 2013) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007), with clusterings of different sizes. We present results using 200 topics, as this gave the best predictive results. Each message is thus represented as an unweighted distribution over clusters. Sentiment & Emotions. We also investigate the extent to which tweets posted by the account owner express more or fewer emotions. The most popular model of discrete emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004), which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise. We automatically quantify these emotions from our Twitter data set using a publicly available crowd-sourcing derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment (Mohammad and Turney, 2010, 2013). Using these models, we assign sentiment and emotion probabilities to each message. Unigrams. We use the bag-of-words representation to reduce each message to a normalized frequency distribution over the vocabulary consisting of all words used by at least 20% of the users (2,099 words in total). We chose this smaller vocabulary, which is more representative of words used by a larger set of users, so that models would be able to transfer better to unseen users. Tweet Features. We compute additional tweet-level features such as: the length in characters and tokens (Length), the type of tweet, encoding if this is an @-reply or contains a URL (Tweet Type), the time of the tweet represented as a one-hot vector over the hour of day and day of week (Post Time), and the number of retweets and likes the tweet received (Impact). Although the latter features are not available in a real-time predictive scenario, they are useful for analysis. Prediction Our hypothesis is that tweets attributed to the owner of the account are different from those attributed to staff, and that these patterns generalize to held-out accounts not included in the training data. Hence, we build predictive models and test them in two setups. First, we split the users into ten folds. Tweets used in training are all posted by 80% of the users, tweets from 10% of the users are used for hyperparameter tuning, and tweets from the final 10% of the users are used in testing (Users). In the second experimental setup, we split all tweets into ten folds using the same split sizes (Tweets). We report the average performance across the ten folds. 
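A minimal sketch of this predictive setup may help; the feature matrix below is synthetic noise (the real features are those described above), scikit-learn's GroupKFold stands in for the authors' user-level folds, and the separate tuning fold is omitted for brevity.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_tweets, n_feats, n_users = 5000, 50, 100
X = rng.normal(size=(n_tweets, n_feats))
users = rng.integers(0, n_users, size=n_tweets)       # account of each tweet
y = (rng.random(n_tweets) < 0.048).astype(int)        # ~4.8% owner-signed

aucs = []
for train, test in GroupKFold(n_splits=10).split(X, y, groups=users):
    # Logistic regression with Elastic Net regularization, as in the text.
    clf = LogisticRegression(penalty="elasticnet", solver="saga",
                             l1_ratio=0.5, C=1.0, max_iter=5000)
    clf.fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))

print(f"mean ROC AUC over held-out users: {np.mean(aucs):.3f}")  # ~0.5 on noise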
Due to class imbalance (only 4.8% of tweets are posted by the account owners), results are measured in ROC AUC, which is a more suitable metric in this setup. In our predictive experiments, we use logistic regression with Elastic Net regularization. As features, we use each feature type described in the previous section separately, as well as all of them together in a single logistic regression model combining all feature sets (Combined). The results using both experimental setups (holding out tweets or users) are presented in Table 1. Results show that we can predict owner tweets with good performance, consistently better than chance, even when we have no training data for the users in the test set. The held-out user experimental setup is more challenging, as reflected by lower predictive numbers for most language features, except for the LIWC features. One potential explanation for the high performance of the LIWC features in this setup is that these are low dimensional and are better at identifying general patterns that transfer to unseen users, rather than overfitting the users from the training data. Table 1: Predictive results with each feature type for classifying tweets attributed to account owners or staffers, measured using ROC AUC. Evaluation is performed using 10-fold cross-validation by holding out in each fold either: 10% of the tweets (Tweets) or all tweets posted by 10% of the users (Users). Analysis In this section we investigate the linguistic and tweet features distinctive of tweets attributed to the account owner and to staff. A few accounts are outliers in the frequency of their signed tweets, with up to 80% owner attributed tweets, compared to only 4.8% on average. We perform our analysis on a subset of the data, in order for our linguistic analysis not to be driven by a few prolific users or by any imbalance in the ratio of owner/staff tweets across users. The data set is obtained as follows. Each account can contribute a minimum of 10 and a maximum of 100 owner attributed tweets. We then sample staff attributed tweets from each account such that these are nine times the number of tweets signed by the owner. Newer messages are preferred when sampling. This leads to a data set of 28,150 tweets, with exactly a tenth of them attributed to the account owners (2,815). We perform analysis of all previously described feature sets using Pearson correlations, following Schwartz et al. (2013a). We compute Pearson correlations independently for each feature between its distribution across messages (features are first normalized to sum to one for each message) and a variable encoding if the tweet was attributed to the account owner or not. We correct for multiple comparisons using Simes correction. Top unigrams correlated with owner attributed tweets are presented in Table 3, with the other groups of textual features (LIWC categories, Word2Vec topics and emotion features) in Table 2. Tweet feature results are presented in Table 4. 
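A minimal sketch of this correlation analysis may also be useful; the data below are synthetic placeholders, and the step-up thresholding shown is the Simes-based (Benjamini-Hochberg) procedure, which is our reading of "Simes correction" rather than the authors' exact code.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_msgs, n_feats = 28150, 40
X = rng.random((n_msgs, n_feats))
X /= X.sum(axis=1, keepdims=True)             # features sum to one per message
y = (rng.random(n_msgs) < 0.1).astype(float)  # 1 = owner attributed

rs, ps = zip(*(pearsonr(X[:, j], y) for j in range(n_feats)))
ps = np.asarray(ps)

# Simes-based step-up thresholding: reject the smallest p-values while
# p_(i) <= i * alpha / m holds for the largest such i.
alpha, m = 0.05, n_feats
order = np.argsort(ps)
below = ps[order] <= np.arange(1, m + 1) * alpha / m
k = below.nonzero()[0].max() + 1 if below.any() else 0
print("significant features:", order[:k])     # empty on pure-noise data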
Table 2 (LIWC categories and Word2Vec clusters most correlated with owner attributed tweets):

LIWC categories:
  r     Name      Top words
  .111  FUNCTION  to, the, for, in, of, and, a, is, on, out
  .107  PREP      to, for, in, of, on, at, with, from, about
  .102  PRONOUN   our, we, you, i, your, my, us, his
  .101  AFFECT    great, thank, support, thanks, proud, care
  .098  SOCIAL    our, we, you, your, who, us, his, help, they
  .095  VERB      is, are, be, have, will, has, thank, support

Word2Vec clusters:
  r     Top words
  .079  great, thank, support, thanks, proud, good, everyone
  .049  led, speaker, charge, memory, universal, speakers
  .047  happy, wishing, birthday, wish, miss, wishes, lucky
  .042  their, families, protect, children, communities, veterans
  .042  an, honor, win, congratulations, congrats, supporting
  .042  family

Our analysis shows that owner tweets are associated to a greater extent with language intended to convey emotion or a state of being and to signal a personal relationship with another political figure. Tweets of congratulations, condolences and support are also specific to signed tweets. These tweets tend to be retweeted less by others, but get more likes than staff attributed tweets. Tweets attributed to account owners are more likely to be posted on weekends, are less likely to be replies to others and contain fewer links to websites or images. Remarkably, there are no textual features significantly correlated with staff attributed tweets. An analysis showed that these tweets are more diverse, and thus no significant patterns are consistently associated with text features such as unigrams, topics or LIWC categories. Conclusions This study introduced a novel application of NLP: predicting if tweets from an account are attributed to their owner or to staffers. Past research on predicting and studying Twitter account characteristics, such as type or personal traits (e.g., gender, age), assumed that the same person is authoring all posts from that account. Using a novel data set, we showed that owner attributed tweets exhibit linguistic patterns distinct from those attributed to staffers. Even when tested on held-out user accounts, our predictive model of owner tweets reaches an average performance of .741 AUC. Future work could study other types of accounts with similar posting behaviors, such as organizational accounts, explore other sources for ground truth tweet identity information (Robinson, 2016), or study the effects of user traits such as gender or political affiliation in tweeting signed content.
3,442.6
2019-07-01T00:00:00.000
[ "Computer Science" ]
Controlling Dispersion Characteristics of Terahertz Metasurface Terahertz (THz) metasurfaces have been explored recently due to their properties such as low material loss and ease of fabrication compared to three-dimensional (3D) metamaterials. Although the dispersion properties of the reflection/transmission-type THz metasurface were observed in some published literature, the method to control them at will has been scarcely reported, to the best of our knowledge. In this context, flexible dispersion control of the THz metasurface will lead to great opportunities toward unprecedented THz devices. As an example, a THz metasurface with controllable dispersion characteristics has been successfully demonstrated in this article, and the incident waves at different frequencies from a source in front of the metasurface can be projected into different desired anomalous angular positions. Furthermore, this work provides a potential approach to other kinds of novel THz devices that need controllable metasurface dispersion properties. S.1 Current Distributions on the Unit Cell The current distributions on a unit cell with different values of the parameter L at 250 GHz are presented in Fig. S1 for reference. Note that the two loops and the I-shaped dipole are designed to be changed proportionally with respect to L, in order to control the reflection phase curve. As L is changed, the other parameters are fixed as given in the caption of Fig. S1. It is also clear that as L is increased from 240 to 360 μm, the currents are concentrated on the outer loop, the inner loop, and the edges of the I-shaped dipole, respectively. The current distributions in Fig. S1 indicate that the three components can resonate individually at 250 GHz by properly changing their physical sizes. S.3 Control of Range and Slope of the Phase Curve To control the dispersion of the DCTM, the unit cell is extensively studied to explore its electromagnetic properties. Fig. S4 shows the control of the slope and range of the reflection phase by combining different physical parameters. In Fig. S4 (a), there are 14 curves in total, which can be divided into three groups. Each group of phase curves has a common cross point, meaning that different slopes as well as different reflection phase ranges can be obtained as the parameter w2 is changed from 10 to 30 μm. Meanwhile, the three groups have three horizontally separated cross points, which provide more solutions to control the slope and range of the reflection phase. In Fig. S4 (b), several examples are shown to simultaneously control the slope and range by the physical parameter b, and comparatively, only the phase range is tuned in Fig. S4 (c). In Fig. S4 (c), a set of parallel and linear phase curves can be obtained as w1 varies. The three figures mentioned above give us clear information that the phase required by the DCTM to control the dispersive properties can be satisfied by the proposed unit cells. The operating principle of the controllable range and slope of the reflection phase is as follows. There are three resonant components in the unit cell, between which the mutual coupling dominates the range and slope of the reflection phase. Stronger electric coupling between the three components, which is determined by the two gaps with a size g3 in Figure 3b, will push the three resonances closer, leading to a linear reflection phase curve. The phase range is controlled by the separation of the three resonances. 
For a very small g3, strong mutual coupling will make the three resonances indistinguishable, resulting in a reduced reflection phase range. Comparatively, the magnetic coupling between the three components is determined by the two gaps with dimensions g1 and g2. Improving the magnetic coupling can also enhance the reflection phase range, but at the cost of reduced reflectivity, due to stronger concentration of the electric current distributions on the three components. S.4 Phase Requirements in Metasurface Grating Design With the point source placed at position (0, 0, F) and F = Dx = Dy/2 = 25λ0 at the center frequency f0, we assume that the metasurface is discretized into 50 × 100 unit cells along the x- and y-axis directions. In Fig. S6 (a), the required phase of the unit cell at different positions on Line 2 versus normalized frequency is shown, with the phase curve of the first unit cell assumed beforehand. The transition point is located at the 33rd unit cell, where the slope has already turned positive from a negative value. This conclusion can also be confirmed by the data in Fig. S6 (b). For the curves at different frequencies, there is a common cross point at the 33rd unit cell, across which the slope of the required phase curve versus frequency is actually reversed. For the plane-wave incident case, the phase required of the unit cell is actually independent of the size of the DCTM along the y axis. Therefore, only the desired properties of the unit cells along Line 1 are shown in Fig. S7. It can be seen from Fig. S7 (a) that the required phase curve for a unit cell far away from the starting point of Line 1 becomes steeper, but no phase advance versus frequency is observed. Meanwhile, the slope of each curve is much smaller than that in Fig. S5 (a), and the total required reflection phase range is also much smaller. In Fig. S7 (b), the phase curves are linear, instead of the nonlinear ones in Fig. S5 (b). It is obvious that the design of such a DCTM in the plane-wave incident case is much easier than in the general case studied above. Fig. S7. For the plane-wave incident case, the required reflection phase of the unit cells on Line 1 versus (a) frequency and (b) the numerical order of the unit cell. The phase curve of the first unit cell is the same as that in Fig. S5. The phase curves in Fig. S7 (b) become linear, instead of the nonlinear ones in Fig. S5 (b). S.5 Reflection Phase Database A database of the reflection phase of the unit cell is first built to map the physical sizes to the reflection phase at 200, 225, 250, 275 and 300 GHz, respectively. The five parameters and the discretized steps are shown in Table IV. Here the dimensions of the unit cells are fixed to be Lx = Ly = 350 μm, taking the oblique incidence on the metasurface into account. The first four parameters are discretized with a step of 5 μm to reduce the computational time required by the full-wave simulations. After a careful parametric sweep, an interpolation process is performed to obtain more detailed phase values with a step size of 0.5 μm for the first four parameters, and the parameter b is discretized with a step of 0.01 within the interpolation process. Then, a six-dimensional database has been established, in which the index of each element indicates the physical sizes of the unit cell and the value of that element corresponds to the reflection phase, as mentioned in the main content. 
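A minimal sketch of such a phase-lookup database may be helpful; the grids and phase values below are synthetic placeholders (the real values come from the full-wave parametric sweep), the fifth parameter b is omitted for brevity, and in practice the phase should be unwrapped before interpolating to avoid jumps at ±180°.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

freqs = np.array([200.0, 225.0, 250.0, 275.0, 300.0])   # GHz
p1 = p2 = p3 = p4 = np.arange(100.0, 161.0, 5.0)         # um, assumed ranges

# Placeholder for the simulated reflection phase, shape (f, p1, p2, p3, p4);
# a smooth fake surface stands in for the full-wave results.
g = np.meshgrid(freqs, p1, p2, p3, p4, indexing="ij")
phase = 0.5 * g[0] + 0.8 * g[1] - 0.6 * g[2] + 0.3 * g[3] - 0.2 * g[4]

lookup = RegularGridInterpolator((freqs, p1, p2, p3, p4), phase)

# Query at the fine 0.5-um step used when assembling the metasurface.
query = np.array([250.0, 112.5, 130.0, 147.5, 155.0])
print("interpolated phase (deg):", float(lookup(query)))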
S.6 Fabricated Prototype Photographs of the fabricated metasurface prototype are shown in Fig. S8. Its total size is 17.5 × 14 mm² in the x- and y-axis directions. From the zoomed-in view in the inset of Fig. S8, it can be seen that there are still some defects in a small portion of the unit cells, which are one of the reasons for the differences between simulations and measurements. Fig. S8. Fabricated DCTM prototype and the zoomed-in view. A ruler is placed beside the fabricated prototype for reference. The mirror-like aluminum plate with very low roughness can be clearly seen. S.7 Error Analysis There are many possible error sources causing the discrepancies between measurements and simulations. The most significant are as follows.
1,815.8
2015-03-23T00:00:00.000
[ "Materials Science" ]
Linguistics meets economics: Dealing with semantic variation We explore here what happens in conversation when listeners encounter variation as well as change in semantics. Working within a general Gricean framework, and in ways somewhat akin to the “Cheap Talk” model of Crawford and Sobel (1982) and the “Rational Speech Act” model of Goodman and Frank (2016), we develop here a transactional view of communicative acts, based largely on insights drawn from economics. Taking a novel perspective, we build on what happens when communication misfires rather than examining what makes for successful communication. We see this effort as a demonstration of the utility of taking an economic perspective on linguistic issues, specifically the analysis of communicative acts. Introduction. Human conversation is inherently interactional, involving, minimally, two interlocutors -a speaker and a listener -who attempt to transmit information through a series of utterances taken in turns; minimally, one utterance which may or may not provoke a response. There are several approaches and even formalized models that recognize this basic structure of conversation. Among the first such approaches, and perhaps the most influential one to date, is that of philosopher of language Herbert Paul Grice and his maxims of conversation (Grice 1975), given in (1) in our adaptation from various sources: (1) a. Maxim of Quality: Be truthful. b. Maxim of Quantity: Be spare but informative -say no more than is needed to get your message across. c. Maxim of Relation: Be relevant. d. Maxim of Manner: Be clear. Grice's approach assumes that the interlocutors are acting in good faith to convey and receive a message. Among other similar but more formal models is that of Crawford and Sobel (1982), the so-called "Cheap Talk" model, which considers the strategic dimensions of information transmission between two agents whose incentives may not be aligned. Notably, each agent recognizes and accounts for the strategic machinations of the other. Most recently, too, there is the Rational Speech Act (RSA) model of Goodman (2012, 2014). The RSA has been defined by Goodman and Frank (2016:819) as "a class of probabilistic model that assumes that language comprehension in context arises via a process of recursive reasoning about what speakers would have said, given a set of communicative goals", and characterized by Yuan, Monroe, Bai, and Kushman (2018) as a system in which actors in a speech act -those serving both as literal speakers and literal listeners and as pragmatic speakers and pragmatic listeners -probabilistically "recursively reason about each other's mental states to communicate". Within the context of exploring what happens in conversation when there is variation as well as change in semantics, we propose yet another interactional model, one that shares some characteristics with the three mentioned above, but is based instead on insights drawn from economics. Moreover, it starts from a different perspective; rather than exploring what makes for successful communication, it takes as its point of departure what happens when communication misfires. Although drawn from economic principles, our approach differs from the extant economic literature in that we abstract away from strategic considerations. In other words, we suppose that speakers have aligned incentives in communicating, and there is no dissembling. This extremely simple setting allows us to isolate and identify some fundamental mechanisms. 
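Since the RSA model is described above only informally, a minimal numerical sketch may help; the two-utterance "scalar" lexicon below is the textbook toy example, our own illustration rather than Goodman and Frank's implementation.

import numpy as np

utterances = ["some", "all"]
referents = ["partial", "total"]
# Truth conditions: "some" is true of both worlds, "all" only of "total".
lexicon = np.array([[1.0, 1.0],    # "some"
                    [0.0, 1.0]])   # "all"
alpha = 1.0                        # speaker rationality

L0 = lexicon / lexicon.sum(axis=1, keepdims=True)   # literal listener P(world|utt)
S1 = np.exp(alpha * np.log(L0.T + 1e-12))           # pragmatic speaker P(utt|world)
S1 /= S1.sum(axis=1, keepdims=True)
L1 = S1.T / S1.T.sum(axis=1, keepdims=True)         # pragmatic listener, Bayes' rule

for u, row in zip(utterances, L1):
    print(u, dict(zip(referents, row.round(3))))
# "some" shifts belief toward the "partial" world: a scalar implicature,
# derived purely from recursive reasoning about the alternative utterance.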
The leading issues motivating the present study are the questions in (2): (2) a. Can there be change in semantics? b. Assuming an affirmative answer to (2a), what happens when such change occurs? In what follows, we address these questions, and develop a perspective on them, and especially on (2b), that draws on economic theory, ultimately viewing conversation and the extraction of meaning from conversation in transactional terms. Our two basic observations are -on the linguistics side -that some constraint must necessarily exist on meaning change, and -on the economics side -that the consequences and costs of miscommunication produce just such a constraint. On certain rare occasions, the costs of even a minor miscommunication can be enormous, as when NASA lost its $125 million Mars Climate Orbiter because engineers had used conflicting units of measure. More puzzling than these large but rare errors, though, is the surprising frequency with which miscommunication causes (comparatively) modest yet non-trivial losses. As a leading illustrative example, we consider semantic discord in the entrepreneurial finance world. The associated frictions have real and non-negligible costs. This bolsters our notion that we have identified a relevant and applicable constraining force on semantic change, but at the same time, it raises further questions on both the linguistic and economic fronts. Specifically, why do we see as much costly miscommunication as we do? What linguistic forces and economic mechanisms are at work? We leave these questions for future work, and concentrate here on the foundational issues set forth in (2). 2. An answer to (2a). One answer to the first question, (2a), is no, there is no semantic change per se, no change in meanings in the way that there are changes in the shape of a morpheme (e.g. the adjectival suffix -lic of Old English becoming Modern English -ly) or changes in syntax (Israeli Hebrew yeš li ha sefer 'I have the book' (literally, 'there-is to-me the book') becoming for some speakers yeš li et ha sefer (literally, 'there-is to-me OBJECT.MARKER the book')). That is, starting with a view of semantics in which the meaning of a linguistic form involves a linkage between phonological material and a real-world referent, one could take the position that except in cases of newly created entities -i.e., true inventions like computers or telephones, even the wheel prehistorically -all real-world referents -i.e., all the semantic content that could be attached to phonological forms -are already present and available; 1 it is just that not all are linked to particular sequences of phonemes for all speakers in all languages. 2 In this view, therefore, a case like the famous shift of meaning 3 for bead, from 'prayer' (first attested c. 885 and in use in this meaning up to 1554) to 'small perforated ball... used for keeping count of prayers said' (first attested 1377), and then to a more general sense of 'small perforated body, spherical or otherwise, of glass, amber, metal, wood, etc., used as an ornament, either strung in a series to form a necklace, bracelet, etc., or sewn upon various fabrics' (first attested c. 1400), should not be seen as a change on the semantic level as such. The semantics, understood as the set of real-world referents, of entities to be associated with particular forms, stays the same, but there is instead a change in which referent -a prayer or a small ball or just what -is connected to a particular lexeme, a particular string of sounds. 
The same holds for shifts involving content that is less concrete than that with bead. Consider the verb impeach, for instance. In the late 14th century (c. 1380) and early 15th century (1425), it had a general sense of "to bring a charge or accusation against; to accuse of, charge with"; that meaning gave way by no later than the second half of the 16th century (1569) to a more specific one of "to accuse of treason or other high crime or misdemeanor (usually against the state) before a competent tribunal", and even later (18th/19th centuries), especially in the American context, it came to refer even more specifically to accusing an elected official of a high crime or misdemeanor (as in the US Constitution ii, §4). Thus, we see the progression given in (3): (3) impeach → 'accuse of wrong-doing' (in general, c. 14th century) → 'accuse of treason or high crime' (c. 16th century) → 'accuse an elected official of a high crime' (c. 18th/19th century) Thus, in this view, what is at issue is lexical change, a change in a given lexical entry, a kind of reattachment, as it were, between a form and a referent, and not "semantic" change per se, not a change in the real-world entities that linguistic forms are attached to. This view becomes important when we turn to a consideration of variation and meaning. 3. Variation in the meaning of a lexical item. It is well established in socio-historical linguistics that one can treat variation between speakers of ostensibly the same language as evidence of change. Working from the assumption that the variation is the result of a change that one speaker or a subset of speakers have undergone, differences in details of usage between speakers are indicative of a change in the language as a whole. By simply noting the variation, one might not be able to determine the directionality of the change, i.e., whether Speaker A's feature x represents the older state or Speaker B's feature y does, but the fact of a change is clear. Among the types of variation that one can observe, along with different pronunciations for words, different realizations of sounds, different syntactic constructions, and so on, there are differences between speakers as to the meanings attached to particular words/forms. Thus, to continue the present-day example from section 2, American English speakers vary as to the meaning -the real-world referent in the view espoused in section 2 -attached to the word impeach. Some see it as meaning 'formally accuse a public official of a serious wrong-doing that can lead to ouster from office' whereas others take it to mean 'formally accuse a public official of a wrong-doing and remove that person from office'.

1 In a certain sense they represent the same institution. Such is also the case with individuals: any of us at age 4 is different from the same individual at age 30 or age 60. The content of the universe of real-world referents is not altered in the PRAYER to BEAD case discussed immediately below, but it is in the Yankees case.

2 We recognize that there is more to semantics than just lexical meaning, including scope relations, truth conditions, and the like. We focus here just on the content side of the semantics of a given linguistic form, as it is a phenomenon that is more readily evident synchronically, as discussed below, and more readily studied from a diachronic perspective.

3 Except where specifically noted, all definitions here, and dates of attestation and usage, are taken from the on-line version of the Oxford English Dictionary (oed.com). 
This latter sense, and the variation that speakers show, is indicated by a Quora question from 5 Sept 2018: "If President Bill Clinton was impeached, why did he not leave the office?", suggesting confusion on the part of the questioner in the face of other speakers who understood or used the word differently. There are also caveats in Wikipedia pointing in that direction: "Impeachment is the process by which a legislative body levels charges against a government official. Impeachment does not in itself remove the official definitively from office". Such caveats would not be needed if there was no variation in the use of this word. For expository purposes for the moment, we refer here to the variation in the use of impeach, and cases like it, as "semantic variation". In that case, then, "semantic" variation reveals an important observation concerning variation in general. In particular, semantic variation between speakers is different in kind from variation in other components. That is, in cases of phonetic variation, such as [εg] vs. [ejg] for egg, or of morphological variation, such as derived noun competence vs. competency (from adjective competent) or past tense costed vs. cost, or plural octopuses vs. octopi vs. octopodes, or of syntactic variation, such as needs washed vs. needs to be washed vs. needs washing, what is at issue is really two (or more) ways of saying the same thing. It is clear that the meaning is constant across the variants and just the forms differ. Such cases therefore meet the classic definition given by Labov and others in the variationist school of sociolinguistics for a "linguistic variable". Moreover, for speakers, such variants are recognizably different ways of saying the same thing. However, with variation in meanings (referents) attached to a form, as with impeach, it is not a case of two ways of saying the same thing but rather the saying of two different things. That is, attached to the same form there are two meanings, two sets of real-world referents or consequences, that are different, even if related. The relevance of this observation needs to be understood against the backdrop of what we refer to as a "transactional view of communication", as explained in the next section. 4. A transactional view of communication, explained. We start with the premise that language use is inherently transactional, involving an attempt at a communicative act minimally between two speakers, an attempt by one speaker to engage in a transaction by which another speaker, an interlocutor and in this case actually a listener, gains knowledge of some meaning that the first speaker wanted to convey. As noted in section 1, we assume that incentives are aligned such that strategic considerations, deception, dissembling, etc. can be ignored. That is to say, an utterance (U_A) by one speaker (S_A) is designed to elicit a response of some sort by another speaker (S_B), either in the form of a rejoinder utterance (U_B) or simply an acknowledgment, tacit or overt, that the information S_A has conveyed in U_A has been taken in. 4 
That is, with any miscommunication, there is opportunity cost, minimally, the cost (in time and energy) of repeating oneself or negotiating with one's interlocutor to get the intended message across, since one could be doing something else if repetition/negotiation were not needed. In ordinary conversation, the costs of such miscommunications are generally minimal, especially in terms of time, even if real, perhaps at best a minor annoyance. But there can be contexts in which the costs of miscommunication can actually be very large, so that the consequences of a misunderstanding due to semantic variation can be serious, as outlined in section 5. 5. A case in point - an answer to (2b). We illustrate the potential for dire consequences of semantic variation resulting in a disruption in the success of a communicative act by looking at potential interactions between venture capitalists (VCs) and entrepreneurs. The starting assumptions of the interlocutors are clear: entrepreneurs seek funding from VCs and VCs seek good investments. Communication is very costly in real terms, involving low bandwidth, and it is rather decentralized. What we have in mind here is face-to-face meetings, selected on the basis of brief email exchanges, meetings which entail significant monetary costs (travel, etc.) and considerable opportunity cost in that entrepreneurs and VCs alike could be pursuing other leads. Of course, some search friction is inevitable, but miscommunications exacerbate the costs, in the case, for instance, of face-to-face meetings that turn out to be a waste of both parties' time. 5 Anecdotally, such situations are surprisingly common. 6 For instance, if a VC states as their avowed goal that they are interested in finding a unicorn, in the current sense of 'highly successful start-up', that would seem reasonable, and entrepreneurs might act accordingly and contact said VC if they thought they had a start-up that fit that description. It would be a waste of the entrepreneurs' time if the VC meant s/he was literally chasing the mythical one-horned beast and the entrepreneurs interpreted unicorn differently from what the VC intended. 7 Admittedly, this unicorn scenario may seem far-fetched, and to be sure, unicorn probably has not caused significant confusion, but it makes the point about the possible consequences of a disruption in communication due to a difference in the meaning attached to a particular word. 8 And semantic variation can be a real issue in the VC/start-up world. A more serious, and less far-fetched, example involves the different senses associated with different funding stages. Some VCs focus on very early-stage start-ups, while some VCs focus on very mature start-ups, and other VCs focus various places in between. Two terms commonly used to characterize funding stage are angel and seed, used, for instance, to label rounds of funding (e.g., angel round). Although these terms have been around for many years, their meanings have been drifting over time. For example, over the course of two years, Clark-Joseph's firm (see footnote 6) raised three progressively larger rounds of funding, titled on the term sheets "seed", "seed v.2", and "seed v.3". Anecdotally, this drift is a significant source of miscommunication, confusion, and ultimately costly miscoordination. When an entrepreneur and a VC (unwittingly) use the word to mean different things, meetings are wasted, resources are fruitlessly expended, and meanwhile opportunities are missed.

5 Our simplifying assumption of truthfulness in this setting with capital-hungry entrepreneurs is not nearly as tenuous as it might at first appear to be. At the phase of lengthy, detailed interaction, VCs will only be fooled by the most elaborate and meticulous web of lies. Under the mild assumption that few entrepreneurs attempt massive fraud, there will typically be little reason for an entrepreneur to lie with the aim of getting a follow-up meeting that will be useless.

6 Author Clark-Joseph was a co-founder of the medical company Valisure LLC, and these anecdotes are drawn from his half-decade spent in the world of entrepreneurial and venture finance, and from his associates in that world.

7 In case evidence of this start-up-related meaning is needed, we can point to the results of a google search of 28 December 2019: of the seven most likely auto-completions in a search asking just "how many unicorns", four pertained to the mythical beast ("how many unicorns ... are left in the world/are there in the ark/are left/are in the ark"), two were ambiguous ("how many unicorns ... does the us have/are in the us"), and one was clearly to be understood in the start-up sense ("how many unicorns ... are profitable"). There is also a recent coinage decacorn, meaning 'a relatively new company that is worth at least $10 billion' (https://www.dictionary.com/e/tech-science/decacorn/ search: 22 March 2020), formed on the basis of unicorn in a financial sense. We are aware, too, of a generalized novel sense of unicorn as any unique kind of entity, as witnessed by a new (as of September 2019) television show on CBS entitled "The Unicorn" about a somewhat unique type of person, namely a single male -a widower, as the storyline goes -who is "the perfect single guy, i.e., a 'unicorn': employed, attractive, and with a proven track record of commitment" (https://www.cbs.com/shows/the-unicorn/about/). It is easy to understand the development of both the financial sense and the generalized sense as metaphors based on the mythical-beast sense.

8 In case a reader might think it unlikely that a VC and an entrepreneur in the world of start-ups and start-up funding would not be on the same page as to the meaning of a term like unicorn, we note that it is not necessarily obvious on which side of the fence the non-mythical-beast sense of unicorn originated. That is, we consider it just as likely that an entrepreneur first innovatively and metaphorically labelled him/herself as the "unicorn" VCs were looking for as that a VC declared him/herself to be looking for a metaphorical "unicorn" among entrepreneur candidates. Furthermore, third parties, such as reporters and analysts, may have introduced the novel usage, whereupon the divide in understanding need not align with the divide between sides of the market.

6. On some parallels to our transactional model. In section 1, we refer to other views of communicative acts that are rather like the transactional model advocated here; in particular, Grice's maxim-based approach, the "Cheap Talk" model of Crawford and Sobel (1982), and the RSA model of Goodman and Frank (2016). 9 We are certainly Gricean in our general approach, as we assume that miscommunication lurks in situations where each interlocutor believes s/he is following Grice's maxims, so that every word that is contained in an exchange is treated as truthful, informative, relevant, and clear, and meaningful, importantly, according to each interlocutor's sense of the meaning. However, by way of drawing some distinctions between what we are advocating here and the formal models described in Section 1, we remind readers that the other models look to what makes for successful communication, whereas we focus on what can go awry in an attempt at successful communication. Moreover, those models are rooted more in game theory, whereas we have more of an economic basis to our model. For instance, RSA assumes reasoning, i.e. "rational", beings, whereas we look more to the economist's notion of "a rational actor", in which actors employ beliefs and higher-order beliefs (beliefs about beliefs) -their own beliefs and the beliefs of others -in making decisions, and in taking action; at its most basic, "rational" here means making choices that are consistent with one's preferences, and which maximize utility. Additionally, we focus on the transaction itself rather than the inferential recursion that, in RSA, for instance, leads to success in the conveying of a message. This transactional focus means that while we operate in a Gricean-based model, we do so with economic sensibility, including notions such as utility and cost, added in.

9 At the time we devised our transactional model, in preparation for the presentation in an Organized Session on Formal Approaches to Grammaticalization at the Annual Meeting of the Linguistic Society of America in New Orleans (5 January 2020), we were unaware of either the "Cheap Talk" or RSA models. It was only in the session itself that RSA came to our attention, and the "Cheap Talk" model came before us as we delved more deeply into RSA. We have been happy to learn of them and are pleased to be part of this general intellectual thrust toward understanding acts of communication.

7. Back to the beginning: Change in grammatical semantics. Although our goal here has been largely to introduce economic thinking into aspects of linguistic analysis, especially with regard to semantic variation and change, we started with a consideration of a purely linguistic question, and so we end in the same way. The questions in (2) essentially ask what it means to talk about semantic change, and our focus has been largely on change in lexical semantics. It is reasonable to ask, therefore, whether grammatical meaning can be brought under the same view as that espoused here. In particular, do the same transactional principles -and for that matter, the computations of the RSA, for instance -apply to variation in the grammatical meaning associated with specific morphemes and periphrastic combinations? Put in other words, this question could be framed as in (4): (4) The first time a listener encounters a new "gram" (as a unit of encoded grammatical information), how does s/he interpret it? We suggest that a gram is intelligible to a listener if it is semantically compositional, so that the meaning of the gram is evident from the sum of the meaning of the parts of which it is composed. 
However, once, or rather if, such compositionality is no longer evident, then the listener is in the position of an entrepreneur not knowing what a VC has in mind with a particular locution. A related question is whether constraining "semantic" change is possible. It is reasonable to suppose that one cannot simply move from any form-meaning relation A to any other form-meaning relation B willy-nilly, any more than a given language A (where A is a possible human language) can turn unconstrainedly into any other possible human language (e.g. a Latin-like language turning into a Chinese-like language overnight). We note that, if nothing else, economic costs will dampen entirely unconstrained changes, but that is not a system-based constraint. However, in a world where 'prayer' can turn into 'small round glass object', due to the societal milieu of the use of rosary beads to count prayers, are any constraints possible? 10 One can ask if it matters whether we are dealing with the possibly more restricted set of grammatical meanings as opposed to the quite likely less restricted set of lexical meanings (possible real-world referents). We are inclined to think that there may be no linguistic (system-based) constraints, but we leave this as an open question for now.
5,752.6
2020-06-09T00:00:00.000
[ "Economics" ]
On the Riesz Almost Convergent Sequences Space

1. Introduction and Preliminaries

Let w be the space of all real or complex valued sequences. Then, each linear subspace of w is called a sequence space. For example, the notations ℓ∞, c, c0, ℓ1, cs, and bs are used for the sequence spaces of all bounded, convergent, and null sequences, absolutely convergent series, convergent series, and bounded series, respectively. Let λ and μ be two sequence spaces and A = (a_nk) an infinite matrix of real or complex numbers a_nk, where n, k ∈ N = {0, 1, 2, ...}. Then, A defines a matrix mapping from λ to μ, denoted by A : λ → μ, if for every sequence x = (x_k) ∈ λ the sequence Ax = {(Ax)_n}, the A-transform of x, is in μ, where

(Ax)_n = Σ_k a_nk x_k  (n ∈ N).  (1.1)

By (λ : μ), we denote the class of matrices A such that A : λ → μ. Thus, A ∈ (λ : μ) if and only if the series on the right side of (1.1) converges for each n ∈ N and every x ∈ λ, and we have Ax = {(Ax)_n}_{n∈N} ∈ μ for all x ∈ λ. The matrix domain λ_A of an infinite matrix A in a sequence space λ is defined by

λ_A = {x = (x_k) ∈ w : Ax ∈ λ}.  (1.2)

The set f of almost convergent sequences is defined by

f = { x ∈ ℓ∞ : lim_{n→∞} (1/(n+1)) Σ_{k=0}^{n} x_{k+p} = α uniformly in p }.  (1.3)

If x ∈ f, then x is said to be almost convergent to the generalized limit α. When x ∈ f, we write f-lim x = α. Lorentz [11] introduced this concept and obtained the necessary and sufficient conditions for an infinite matrix to contain f in its convergence domain. These conditions on an infinite matrix A = (a_nk) consist of the standard Silverman-Toeplitz conditions for regularity plus the condition lim_{n→∞} Σ_k |a_nk − a_{n,k+1}| = 0. Such matrices are called strongly regular. One of the best known strongly regular matrices is C, the Cesàro matrix of order one, which is the lower triangular matrix defined by

c_nk = 1/(n+1) if 0 ≤ k ≤ n, and c_nk = 0 otherwise,

for all n, k ∈ N. A matrix U is called a generalized Cesàro matrix if it is obtained from C by shifting rows. Let p : N → N. Then, U = (u_nk) is defined by

u_nk = 1/(n+1) if p(n) ≤ k ≤ p(n) + n, and u_nk = 0 otherwise,

for all n, k ∈ N. Let us suppose that G is the set of all such matrices obtained by using all possible functions p. Now, right here, let us give a new definition for the set of almost convergent sequences that was introduced by Butković et al. [12]:

Lemma 1.1. The set f of all almost convergent sequences is equal to the set ⋂_{U∈G} c_U.

Another one of the best known regular matrices is R = (r_nk), the Riesz matrix, which is the lower triangular matrix defined by

r_nk = r_k / R_n if 0 ≤ k ≤ n, and r_nk = 0 if k > n,

where (r_k) is a sequence of positive numbers and R_n = Σ_{k=0}^{n} r_k. Let K be a subset of N. The natural density δ of K is defined by

δ(K) = lim_{n→∞} (1/n) |{k ≤ n : k ∈ K}|,

where the vertical bars indicate the number of elements in the enclosed set. The sequence x = (x_k) is said to be statistically convergent to the number l if, for every ε > 0, δ({k : |x_k − l| ≥ ε}) = 0 (see [13]). In this case, we write st-lim x = l. We will also write S and S0 to denote the sets of all statistically convergent sequences and statistically null sequences. The statistically convergent sequences were studied by several authors (see [13, 14] and others). 
Let us consider the following functionals defined on $\ell_\infty$ (with $\sigma$ a one-to-one mapping of $\mathbb{N}$ into itself and $\sigma^j$ its $j$-th iterate):

$l(x) = \liminf_k x_k$, $\quad L(x) = \limsup_k x_k$, $\quad L^*(x) = \limsup_{n} \sup_{p} \frac{1}{n+1} \sum_{j=0}^{n} x_{j+p}$, $\quad q_\sigma(x) = \limsup_{n} \sup_{p} \frac{1}{n+1} \sum_{j=0}^{n} x_{\sigma^j(p)}$. (1.8)

In [15], the $\sigma$-core of a real bounded sequence $x$ is defined as the closed interval $[-q_\sigma(-x), q_\sigma(x)]$, and the inequalities

$q_\sigma(Ax) \le L(x)$ ($\sigma$-core of $Ax$ $\subseteq$ K-core of $x$), $\qquad q_\sigma(Ax) \le q_\sigma(x)$ ($\sigma$-core of $Ax$ $\subseteq$ $\sigma$-core of $x$)

have been studied for all $x \in \ell_\infty$. Here the Knopp core (K-core, for short) of $x$ is the interval $[l(x), L(x)]$. In particular, when $\sigma(n) = n + 1$, since $q_\sigma(x) = L^*(x)$, the $\sigma$-core of $x$ reduces to the Banach core (B-core, for short) of $x$, defined by the interval $[-L^*(-x), L^*(x)]$. The concepts of B-core and $\sigma$-core have been studied by many authors [16, 17].

Recently, Fridy and Orhan [13] introduced the notions of statistical boundedness, statistical limit superior ($st\text{-}\limsup$, for short) and statistical limit inferior ($st\text{-}\liminf$), defined the statistical core (briefly, st-core) of a statistically bounded sequence as the closed interval $[st\text{-}\liminf x, st\text{-}\limsup x]$, and determined the necessary and sufficient conditions for a matrix $A$ to yield K-core$(Ax) \subseteq$ st-core$(x)$ for all $x \in \ell_\infty$.

Quite recently, the $B_C$-core of a sequence was introduced in [18] as the closed interval $[-T^*(-x), T^*(x)]$, and the analogous inclusion inequalities were studied for all $x \in \ell_\infty$. Let us write

$\tau^*(x) = \limsup_{n} \sup_{p} \frac{1}{n+1} \sum_{j=0}^{n} (Rx)_{j+p}$, (1.11)

so that the $B_R$-core of $x$ is the closed interval $[-\tau^*(-x), \tau^*(x)]$. It is then easy to see that the $B_R$-core of $x$ equals $\{\alpha\}$ if and only if $\hat f\text{-}\lim x = \alpha$, where the space $\hat f$ is introduced below. As is well known, obtaining a new sequence space by means of the convergence field of an infinite matrix is an old method in the theory of sequence spaces; however, the study of the matrix domain of an infinite matrix in the space of almost convergent sequences is new.

The Sequence Spaces $\hat f$ and $\hat f_0$

In this section we introduce the new spaces $\hat f$ and $\hat f_0$ as the sets of all sequences whose $R$-transforms are in the spaces $f$ and $f_0$, respectively; that is,

$\hat f = \{x \in w : Rx \in f\}, \qquad \hat f_0 = \{x \in w : Rx \in f_0\}$. (2.1)

With the notation of (1.2), we can write $\hat f = f_R$ and $\hat f_0 = (f_0)_R$. Define the sequence $y = (y_k)$, which will be frequently used, as the $R$-transform of a sequence $x = (x_k)$; that is,

$y_k = (Rx)_k = \frac{1}{R_k} \sum_{j=0}^{k} r_j x_j \quad (k \in \mathbb{N})$. (2.2)

If $R = C$, the Cesàro matrix of order 1, then the spaces $\hat f$ and $\hat f_0$ correspond to the spaces studied in [18]. Suppose that $\hat G = \{G : G = U \cdot R$, $U \in G$ and $R$ is the Riesz matrix$\}$. Then we have the following proposition.

Proposition 2.1. The set $\hat f$ is equal to the set $\bigcap_{G \in \hat G} c_G$.

Proof. The proof is similar to the proof of Lemma 1.1, so we omit the details; see [12].

Consider the function $\|\cdot\|_{\hat f} : \hat f \to \mathbb{R}$ defined by $\|x\|_{\hat f} = \|Rx\|_f$. This function is a norm, and $(\hat f, \|\cdot\|_{\hat f})$ is a BK-space, as the following theorem shows.

Theorem 2.2. The sets $\hat f$ and $\hat f_0$ are linear spaces with coordinatewise addition and scalar multiplication, and they are BK-spaces with the norm $\|x\|_{\hat f} = \|Rx\|_f$.

Proof. The first part of the theorem can easily be proved; we prove the second part. Since (1.2) holds, $f$ and $f_0$ are BK-spaces with respect to their natural norm [1], and the matrix $R$ is normal, it follows that the spaces $\hat f$ and $\hat f_0$ are BK-spaces.

Theorem 2.3. The sequence spaces $\hat f$ and $\hat f_0$ are linearly isomorphic to the spaces $f$ and $f_0$, respectively.

Proof. Since the fact "the spaces $\hat f_0$ and $f_0$ are linearly isomorphic" can be proved in a similar way, we consider only the spaces $\hat f$ and $f$. To prove that $\hat f \cong f$, we must show the existence of a linear bijection between the spaces $\hat f$ and $f$. Consider the transformation $T$ defined, with the notation of (2.2), from $\hat f$ to $f$ by $x \mapsto y = Tx$. The linearity of $T$ is clear. Further, $x = \theta = (0, 0, \ldots)$ whenever $Tx = \theta$, and hence $T$ is injective.
Let $y = (y_k) \in f$, and define the sequence $x = (x_k)$ by

$x_k = \frac{R_k y_k - R_{k-1} y_{k-1}}{r_k} \quad (k \in \mathbb{N}$; with the convention $R_{-1} = y_{-1} = 0)$.

Then $Rx = y \in f$, which shows that $x \in \hat f$. Consequently, we see that $T$ is surjective. Hence $T$ is a linear bijection, which shows that the spaces $\hat f$ and $f$ are linearly isomorphic, as desired. This completes the proof.

Theorem 2.4. The spaces $\hat f$ and $\hat f_0$ are not solid sequence spaces.

Proof. Taking into account the definition of the Riesz matrix, it is not hard to construct $u \in \ell_\infty$ and $x \in \hat f$ whose coordinatewise product lies outside $\hat f$; this shows that the multiplication $\ell_\infty \hat f$ of the spaces $\ell_\infty$ and $\hat f$ is not a subset of $\hat f$, and therefore the space $\hat f$ is not solid. The proof for the space $\hat f_0$ is similar, so we omit it.

Theorem 2.5. Let the spaces $\hat f$ and $\hat f_0$ be given. Then:
(1) the inclusion $\hat f_0 \subset \hat f$ holds, and the space $\hat f$ is not a subset of the space $\ell_\infty$;
(2) if $(1/R_n) \in c$ and $(r_k) \in \ell_1$, then $\ell_\infty \subset \hat f$.

Proof. (1) Clearly, the inclusion $\hat f_0 \subset \hat f$ holds, and an unbounded sequence whose $R$-transform is almost convergent shows that the space $\hat f$ is not a subset of the space $\ell_\infty$. (2) If $(1/R_n) \in c$ and $(r_k) \in \ell_1$, then for all $x \in \ell_\infty$ we have $Rx \in c$. Therefore, since $\lim Rx = f\text{-}\lim Rx$, we see that $x \in \hat f$.

In Theorem 2.6, we use techniques similar to those of Móricz and Rhoades [19].

Theorem 2.6. Define the sequences $(\alpha_n)$ and $(\beta_n)$ by (2.8) for each $n \in \mathbb{N}$; then conditions (i) and (ii) of (2.9) characterize membership in $\hat f$. Since part (ii) can be proved in a similar way, we prove only part (i).

Proof. Suppose that $\lim_{n \to \infty} (\beta_{2^n} - \alpha_{2^n}) = 0$. For each $n$, choose $r$ to satisfy $2^r \le n \le 2^{r+1}$. We may write $n$ in the dyadic representation $n = \sum_{i=0}^{r} n_i 2^i$, where each $n_i$ is $0$ or $1$ for $i = 0, 1, \ldots, r-1$, and $n_r = 1$. Then, since $n_j \in \{0, 1\}$, we obtain (2.11) and hence (2.12). If $T$ is the lower triangular matrix with nonzero entries $t_{nk} = n_k 2^{k+1}/(n+1)$, then $T$ is a regular matrix, so that $\lim_{r \to \infty} (\beta_{2^r} - \alpha_{2^r}) = 0$ implies, via the equality (2.12), that $\lim_{n \to \infty} (\beta_n - \alpha_n) = 0$. Conversely, assume that $x \in \hat f$; taking $n = 2^p$ then yields the sufficiency part. This completes the proof.

Some Duals of the Spaces $\hat f$ and $\hat f_0$

In this section, using the techniques of [9], we state and prove the theorems determining the $\beta$- and $\gamma$-duals of the spaces $\hat f_0$ and $\hat f$. For sequence spaces $\lambda$ and $\mu$, define the set $S(\lambda, \mu)$ by

$S(\lambda, \mu) = \{z = (z_k) \in w : xz = (x_k z_k) \in \mu$ for all $x = (x_k) \in \lambda\}$. (3.1)

With the notation of (3.1), the $\alpha$-, $\beta$- and $\gamma$-duals of a sequence space $\lambda$, denoted respectively by $\lambda^\alpha$, $\lambda^\beta$ and $\lambda^\gamma$, are defined by

$\lambda^\alpha = S(\lambda, \ell_1), \qquad \lambda^\beta = S(\lambda, cs), \qquad \lambda^\gamma = S(\lambda, bs)$. (3.2)

The following two lemmas, Lemma 3.1 and Lemma 3.2, introduced in [20], give the conditions (3.3) and (3.4) needed in proving Theorems 3.3 and 3.4.

Theorem 3.3. The $\gamma$-duals of the spaces $\hat f$ and $\hat f_0$ are the set $d_1 \cap d_2$, where $d_1$ and $d_2$ are the sets defined by the conditions displayed in (3.5).

Proof. Define the matrix $T = (t_{nk})$ via a sequence $a = (a_k) \in w$ by

$t_{nk} = R_k \Delta(a_k / r_k)$ for $k < n$, $\quad t_{nn} = R_n a_n / r_n$, $\quad t_{nk} = 0$ for $k > n$, (3.6)

for all $n, k \in \mathbb{N}$. Here, $\Delta(a_k / r_k) = a_k / r_k - a_{k+1} / r_{k+1}$. By using (2.2), we derive that

$\sum_{k=0}^{n} a_k x_k = (Ty)_n \quad (n \in \mathbb{N})$. (3.7)

From (3.7), we see that $ax = (a_k x_k) \in bs$ whenever $x = (x_k) \in \hat f$ if and only if $Ty \in \ell_\infty$ whenever $y = (y_k) \in f$. We then derive by Lemma 3.1 the conditions in (3.5), which yields the desired result $\hat f^\gamma = \hat f_0^\gamma = d_1 \cap d_2$.

Theorem 3.4. Define the set $d_3$ by the displayed condition. Then $\hat f^\beta = d_3 \cap cs$.

Proof. Consider the equality (3.7) again. We deduce that $ax = (a_k x_k) \in cs$ whenever $x = (x_k) \in \hat f$ if and only if $Ty \in c$ whenever $y = (y_k) \in f$. It is obvious that the columns of the matrix $T$ defined by (3.6) are in the space $c$. Therefore, we derive from Lemma 3.2 that $\hat f^\beta = d_3 \cap cs$.
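Before moving on to matrix mappings, the inverse relation used in the surjectivity step of Theorem 2.3 (and implicitly in the Abel-summation identity (3.7)) can be checked directly from (2.2); a short worked computation, with the convention $R_{-1} = y_{-1} = 0$, reads:

```latex
R_k y_k - R_{k-1} y_{k-1}
  = \sum_{j=0}^{k} r_j x_j - \sum_{j=0}^{k-1} r_j x_j
  = r_k x_k
\quad\Longrightarrow\quad
x_k = \frac{R_k y_k - R_{k-1} y_{k-1}}{r_k}.
```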
Some Matrix Mappings Related to the Spaces $\hat f$ and $\hat f_0$

In this section, we characterize the matrix mappings from $\hat f$ into any given sequence space via the concept of dual summability methods of the new type introduced by Başar [21]. Note that several researchers, such as Başar [21], Başar and Çolak [22], Kuttner [23], and Lorentz and Zeller [24], have worked on dual summability methods. Following Başar [21], we first give a short survey of dual summability methods of the new type.

Let the sequences $x = (x_k)$ and $y = (y_k)$ be connected by (2.2), and let the $A$-transform of $x$ be $z = (z_n)$ and the $B$-transform of $y$ be $t = (t_n)$; that is,

$z_n = \sum_k a_{nk} x_k$ and $t_n = \sum_k b_{nk} y_k$ $\quad (n \in \mathbb{N})$. (4.1), (4.2)

It is clear here that the method $B$ is applied to the $R$-transform of the sequence $x = (x_k)$, while the method $A$ is applied directly to the entries of the sequence $x = (x_k)$; so the methods $A$ and $B$ are essentially different. Let us assume that the matrix product $BR$ exists, which is a much weaker assumption than the conditions on the matrix $B$ belonging to any matrix class, in general. The methods $A$ and $B$ in (4.1), (4.2) are called dual summability methods of the new type if $z_n$ reduces to $t_n$ (or $t_n$ reduces to $z_n$) under the application of formal summation by parts. This leads to the fact that $BR$ exists and is equal to $A$, and that $(BR)x = B(Rx)$ formally holds if one side exists. This statement is equivalent to the following relation between the entries of the matrices $A = (a_{nk})$ and $B = (b_{nk})$:

$a_{nk} = r_k \sum_{j \ge k} \frac{b_{nj}}{R_j}$, or equivalently $b_{nk} = R_k \Delta\left(\frac{a_{nk}}{r_k}\right)$, for all $n, k \in \mathbb{N}$. (4.3)

Now, we give the following theorem concerning the dual matrices of the new type.

Theorem 4.1. Let $A = (a_{nk})$ and $B = (b_{nk})$ be dual matrices of the new type and $\mu$ any given sequence space. Then $A \in (\hat f : \mu)$ if and only if $B \in (f : \mu)$ and $\{a_{nk}\}_{k \in \mathbb{N}} \in \hat f^\beta$ for each $n \in \mathbb{N}$.

Proof. Suppose that $A = (a_{nk})$ and $B = (b_{nk})$ are dual matrices of the new type, that is, (4.3) holds, and that $\mu$ is any given sequence space; recall that the spaces $\hat f$ and $f$ are linearly isomorphic. Let $A \in (\hat f : \mu)$ and take $y = (y_k) \in f$. Then $BR$ exists and $\{a_{nk}\}_{k \in \mathbb{N}} \in d_3 \cap cs$, which yields that $\{b_{nk}\}_{k \in \mathbb{N}} \in \ell_1$ for each $n \in \mathbb{N}$. Hence, $By$ exists for each $y \in f$, and thus, letting $m \to \infty$ in the equality obtained by summation by parts, we have by (4.3) that $By = Ax$, which gives the result $B \in (f : \mu)$. Conversely, let $\{a_{nk}\}_{k \in \mathbb{N}} \in \hat f^\beta$ for each $n \in \mathbb{N}$, let $B \in (f : \mu)$ hold, and take any $x = (x_k) \in \hat f$. Then $Ax$ exists. Therefore, letting $m \to \infty$ in the same equality, we obtain $Ax = By$, and this shows that $A \in (\hat f : \mu)$. This completes the proof.

Theorem 4.2. Suppose that the entries of the infinite matrices $D = (d_{nk})$ and $E = (e_{nk})$ are connected by the relation

$e_{nk} = \frac{1}{R_n} \sum_{j=0}^{n} r_j d_{jk}$ for all $n, k \in \mathbb{N}$, (4.6)

that is, $E = RD$, and let $\mu$ be any given sequence space. Then $D \in (\mu : \hat f)$ if and only if $E \in (\mu : f)$.

Proof. Let $x = (x_k) \in \mu$, and consider the following equality obtained with (4.6):

$\sum_{k=0}^{m} e_{nk} x_k = \frac{1}{R_n} \sum_{j=0}^{n} r_j \sum_{k=0}^{m} d_{jk} x_k \quad (m, n \in \mathbb{N})$, (4.7)

which yields as $m \to \infty$ that $Ex = R(Dx)$. Hence, $Dx \in \hat f$ whenever $x \in \mu$ if and only if $Ex \in f$ whenever $x \in \mu$. This completes the proof.

We now record Propositions 4.3, 4.4, 4.5 and 4.6, each concerning an infinite matrix $A = (a_{nk})$ of real or complex numbers; they are obtained from Lemmas 3.1 and 3.2 together with Theorems 4.1 and 4.2, and their defining conditions are displayed in (4.8), (4.9), (4.10) and (4.11), respectively.

Core Theorems

In this section, we give some core theorems related to the space $\hat f$. We need the following lemma, due to Das [25], for the proof of the next theorem.

Lemma 5.1. Let $\sup_{n, p} \sum_j |c_{nj}(p)| < \infty$ and $\lim_{n \to \infty} \sup_{p \in \mathbb{N}} |c_{nj}(p)| = 0$. Then there is a $y = (y_j) \in \ell_\infty$ such that $\|y\| \le 1$ and

$\limsup_{n \to \infty} \sup_{p \in \mathbb{N}} \sum_j c_{nj}(p) \, y_j = \limsup_{n \to \infty} \sup_{p \in \mathbb{N}} \sum_j |c_{nj}(p)|$. (5.1)

Theorem 5.2. $B_R$-core$(Ax) \subseteq$ K-core$(x)$ for all $x \in \ell_\infty$ if and only if $A \in (c : \hat f)_{reg}$ and the limit condition (5.2) holds.

Proof. Suppose first that $B_R$-core$(Ax) \subseteq$ K-core$(x)$ for all $x \in \ell_\infty$. If $x \in \hat f$, then we have $\tau^*(Ax) = -\tau^*(-Ax)$. By the hypothesis, we get

$-L(-x) \le -\tau^*(-Ax) \le \tau^*(Ax) \le L(x)$. (5.3)

If $x \in c$, then $L(x) = -L(-x) = \lim x$. So we have $\hat f\text{-}\lim Ax = \tau^*(Ax) = -\tau^*(-Ax) = \lim x$, which implies that $A \in (c : \hat f)_{reg}$. Now, let us consider the sequence $C = \{c_{nj}(p)\}$ of infinite matrices defined in (5.4). Then it is easy to see that the conditions of Lemma 5.1 are satisfied for the matrix sequence $C$. Thus, by using the hypothesis, we can write (5.6); applying $\limsup_{n \to \infty} \sup_{p \in \mathbb{N}}$ and using the hypothesis, we have $\tau^*(Ax) \le L(x) + \varepsilon$. This completes the proof, since $\varepsilon$ is arbitrary and $x \in \ell_\infty$.

In particular, if $r_i = 1$ for all $i$, then $R$ reduces to the Cesàro matrix and we obtain the following result of [18]:

Theorem 5.3. $B_C$-core$(Ax) \subseteq$ K-core$(x)$ for all $x \in \ell_\infty$ if and only if $A \in (c : \hat f)_{reg}$ (with $R = C$) and the Cesàro analogue of (5.2) holds.

Theorem 5.5. $A \in (S \cap \ell_\infty : \hat f)_{reg}$ if and only if $A \in (c : \hat f)_{reg}$ and (5.8) holds for every $E \subseteq \mathbb{N}$ with natural density zero.

Proof. Necessity: Let $A \in (S \cap \ell_\infty : \hat f)_{reg}$. Then $A \in (c : \hat f)_{reg}$ follows immediately from the fact that $c \subset S \cap \ell_\infty$. Defining a suitable sequence $t = (t_k)$ for $x \in \ell_\infty$ then yields (5.8). Sufficiency: Conversely, suppose that $A \in (c : \hat f)_{reg}$ and (5.8) holds. Let $x \in S \cap \ell_\infty$ with $st\text{-}\lim x = \ell$, and write $E = \{k : |x_k - \ell| \ge \varepsilon\}$ for any given $\varepsilon > 0$, so that $\delta(E) = 0$. Since $A \in (c : \hat f)_{reg}$ and $\hat f\text{-}\lim_n \sum_k a_{nk} = 1$, we have, by (5.11) and (5.8), that $\hat f\text{-}\lim Ax = st\text{-}\lim x$; that is, $A \in (S \cap \ell_\infty : \hat f)_{reg}$, which completes the proof.

Theorem 5.6. $B_R$-core$(Ax) \subseteq$ st-core$(x)$ for all $x \in \ell_\infty$ if and only if $A \in (S \cap \ell_\infty : \hat f)_{reg}$ and (5.2) holds.

Proof. Necessity: Let $B_R$-core$(Ax) \subseteq$ st-core$(x)$ for all $x \in \ell_\infty$. Then $\tau^*(Ax) \le \beta(x)$ for all $x \in \ell_\infty$, where $\beta(x) = st\text{-}\limsup x$. Hence, since $\beta(x) = st\text{-}\limsup x \le L(x)$ for all $x \in \ell_\infty$ (see [13]), we obtain (5.2) from Theorem 5.2. Furthermore, one can also easily see that $-\beta(-x) \le -\tau^*(-Ax) \le \tau^*(Ax) \le \beta(x)$; that is,

$st\text{-}\liminf x \le -\tau^*(-Ax) \le \tau^*(Ax) \le st\text{-}\limsup x$. (5.15)

If $x \in S \cap \ell_\infty$, then $st\text{-}\liminf x = st\text{-}\limsup x = st\text{-}\lim x$, and the last inequality implies that $st\text{-}\lim x = -\tau^*(-Ax) = \tau^*(Ax) = \hat f\text{-}\lim Ax$; that is, $A \in (S \cap \ell_\infty : \hat f)_{reg}$.

Sufficiency: Conversely, assume that $A \in (S \cap \ell_\infty : \hat f)_{reg}$ and (5.2) hold. If $x \in \ell_\infty$, then $\beta(x)$ is finite. Let $E$ be the subset of $\mathbb{N}$ defined by $E = \{k : x_k > \beta(x) + \varepsilon\}$ for a given $\varepsilon > 0$. Then it is obvious that $\delta(E) = 0$ and $x_k \le \beta(x) + \varepsilon$ if $k \notin E$. (5.16) Applying the operator $\limsup_{n \to \infty} \sup_{p \in \mathbb{N}}$ and using the hypothesis, we obtain $\tau^*(Ax) \le \beta(x) + \varepsilon$. Since $\varepsilon$ is arbitrary, we conclude that $\tau^*(Ax) \le \beta(x)$ for all $x \in \ell_\infty$; that is, $B_R$-core$(Ax) \subseteq$ st-core$(x)$ for all $x \in \ell_\infty$, and the proof is complete.

Finally, if $r_i = 1$ for all $i$, then $R$ reduces to the Cesàro matrix, and we have

$B_C$-core$(Ax) \subseteq$ st-core$(x)$ for all $x \in \ell_\infty$ if and only if $A \in (S \cap \ell_\infty : \hat f)_{reg}$ (with $R = C$). (5.17)
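For the record, the summation-by-parts computation behind the duality relation (4.3) of Section 4 amounts to a formal interchange of the order of summation via (2.2), sketched as follows:

```latex
t_n = \sum_{j} b_{nj}\, y_j
    = \sum_{j} \frac{b_{nj}}{R_j} \sum_{i=0}^{j} r_i x_i
    = \sum_{i} \Bigl( r_i \sum_{j \ge i} \frac{b_{nj}}{R_j} \Bigr) x_i
    = z_n ,
\qquad\text{so that}\qquad
a_{ni} = r_i \sum_{j \ge i} \frac{b_{nj}}{R_j}.
```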
5,136.6
2012-05-09T00:00:00.000
[ "Mathematics" ]
Unmanned Aerial Vehicles for Three-dimensional Mapping and Change Detection Analysis

Unmanned Aerial Vehicles (UAVs), commonly known as drones, are increasingly being used for three-dimensional (3D) mapping of the environment. This study utilised UAV technology to produce a revised 3D map of the University of Lagos, together with a land cover change detection analysis. A DJI Phantom 4 UAV was used to collect digital images at a flying height of 90 m, with 75% fore and 65% side overlaps. Ground control points (GCPs) for orthophoto rectification were coordinated with a Trimble R8 Global Navigation Satellite System. Pix4D Mapper was used to produce a digital terrain model and an orthophoto at a ground sampling distance of 4.36 cm. The change detection analysis, using the 2015 base map as reference, revealed significant changes in land cover, such as an increase of 16,306.7 m2 in buildings between 2015 and 2019. The root mean square error analysis performed using 7 GCPs showed horizontal and vertical accuracies of 0.183 m and 0.157 m respectively. This suggests a high level of accuracy, which is adequate for 3D mapping and change detection analysis at a sustainable cost.

Introduction

Three-dimensional (3D) mapping is described as the process of gathering locational information and possibly the attributes of features such as roads and buildings, and the representation of such features in three dimensions (latitude, longitude and height above sea level) that can be interpreted by a user [1]. According to [2], 3D data acquisition in urban/sub-urban areas requires planning based on classic standards and the application of a variety of surveying methods. Conventional mapping techniques include triangulation, trilateration, traversing, levelling and radiation [3,4], while modern techniques include total station surveying, aerial photogrammetry, Global Positioning System (GPS) or Global Navigation Satellite System (GNSS) surveys, and remote sensing (RS) [5]. In RS, the characteristics of objects or features can be studied using data collected from remote observation points [6]. In photogrammetry, the size, shape and location of objects can be determined using measurements in a single image, a stereo pair or a block of two or more images. However, measurements made in one image can only give two-dimensional (2D) coordinates, while 3D coordinates can be obtained using two or more images of the same object captured from different positions. This is called stereoscopic viewing or stereo photogrammetry [7]. The images used in photogrammetry can be captured by various sensors on board platforms such as manned aircraft (fixed wing or rotary) and satellites. However, operating these technologies involves huge investments beyond the resource base of many individuals. In response, Unmanned Aerial Vehicle (UAV) surveys have emerged as an alternative to classical manned aerial photogrammetric surveys [1]. Several authors have highlighted the advantages of UAVs, which include rapid data collection for site inspection, surveillance, mapping, 3D modelling and real-time data capture with high geometric resolution under good atmospheric conditions [8][9][10][11]. Deliverables of UAV surveys, which include orthophotos, 3D point clouds and digital surface models, have wide applications. For example, orthophotos are very useful for manual or semi-automatic feature extraction for map creation or updating [11], as well as for change detection studies [11,12].
Urban change detection is important for city monitoring and disaster response, as well as for the updating of maps and three-dimensional models [13,14]. The application of UAV data to change detection analysis presents obvious advantages in terms of spatio-temporal resolution and cloudlessness compared with satellite RS images [15]. As noted by [16] and [17], images with high spatial resolution are especially preferred for the accurate processing of variations in urban land cover. This requirement can be met by UAVs through the provision of precise data measurements [18,19]. The setting of the present study is the University of Lagos in Lagos, Nigeria. This is an ideal location since there are no restrictions imposed on UAV surveys conducted for the purpose of teaching and research. In addition, the university is growing both in terms of its human population and its infrastructure, as evidenced by changes in land cover and developmental activities. Accordingly, there is a need for up-to-date spatial information on the university to support planning, development and decision making. This study utilised UAV technology to produce a three-dimensional map of the University of Lagos, which will be useful for comprehensive planning and design, development and monitoring, as well as for understanding changes in land cover over time so as to support informed decisions. To achieve this, a UAV-photogrammetric survey was carried out on the University of Lagos main campus, together with a ground control survey using Differential GPS (DGPS).

Study Area

The study area is the University of Lagos, Akoka, in the Lagos Mainland Local Government Area (LGA) of Lagos State, Nigeria. The campus is located between longitudes 3°23ʹ00ʺ E - 3°24ʹ30ʺ E and latitudes 6°30ʹ00ʺ N - 6°31ʹ30ʺ N. It lies in the centre of the Lagos metropolis and is low-lying with variations in terrain relief, which makes it susceptible to flooding during rainfall. Figure 1 presents the location map of the University of Lagos, Nigeria. The University of Lagos is bounded to the east by the Lagos Lagoon and surrounded by densely populated, built-up areas. The university has a growing student population, faculties and other academic and research infrastructure, including recreational facilities and parking spaces, religious buildings, restaurants and residential buildings. This study area was selected primarily for its accessibility and to avoid the need to obtain a flight permit, which might hinder or delay the execution of the project elsewhere. The size of the total image collection area was approximately 281 hectares.

Equipment

A low-cost UAV (DJI Phantom 4 Professional) with a maximum flight time of approximately 30 minutes was used for the aerial data acquisition. The camera lens of the UAV has an 84° field of view (FOV) and a focal length of 8.8 mm/24 mm (35 mm format equivalent). The gimbal has a controllable range of −90° to +30° and a maximum controllable angular speed of 90°/s. The UAV system also has an in-built positioning system that can track both Global Positioning System (GPS) and GLONASS satellites. With GPS positioning, the hover accuracy range is ±0.5 m (vertical) and ±1.5 m (horizontal). Prior to the survey, the UAV was checked and evaluated to ensure it was in good condition, as in any survey project involving instrumentation. A Trimble R8 GNSS was used to determine the coordinates of the Ground Control Points (GCPs).
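As a rough planning aid, the nominal ground sampling distance (GSD) of a nadir image follows from the flying height, lens focal length and physical pixel pitch. The Python sketch below is purely illustrative: the 4.3 µm pixel pitch is a hypothetical value chosen so that the formula lands near the 4.36 cm GSD reported after processing, not a quoted camera specification.

```python
def ground_sampling_distance(height_m, focal_mm, pixel_um):
    """Nominal GSD of a nadir image: the footprint of one pixel on the ground.

    All parameter values passed in below are assumptions for illustration.
    """
    return height_m * (pixel_um * 1e-6) / (focal_mm * 1e-3)  # metres

# 90 m flying height, 8.8 mm lens, hypothetical ~4.3 um pixel pitch:
gsd = ground_sampling_distance(90, 8.8, 4.3)
print(f"GSD ~ {gsd * 100:.2f} cm")  # ~4.40 cm, close to the reported 4.36 cm
```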
Table 1 presents the list of all the hardware and software used in the study, such as the GNSS receiver and solutions, the UAV and Pix4D Mapper, and their respective functions.

Field Survey

Ground Control Points. The field survey included the measurement of the Ground Control Points (GCPs) using dual-frequency GNSS receivers, and the acquisition of aerial photographs using the DJI Phantom 4 UAV. Fourteen well-spread GCPs, namely XST 347, YTT 28/186, GME 2, PD UN01, etc., were signalised in the study area. Figure 2 shows a view of two signalised GCPs, YTT 28/186 and XST 347. White emulsion paint was used to make cross markings on the GCPs so that they would be visible from a high altitude. The dimensions of the cross markings were approximately 80-100 cm in length and 15-20 cm in width. Static GNSS observation of about 30-40 minutes occupation time was then carried out on each of the GCPs. Figure 3 shows the spatial distribution of the signalised GCPs with Google Earth imagery as the backdrop. After the pre-marking and measurement of the GCPs, the study area was subdivided into different missions within AutoCAD software and converted into Keyhole Markup Language (KML) files, which were subsequently loaded into Drone Deploy software for the flight mission. The UAV flight was conducted in a systematic manner based on the partitions to maximise efficiency. The following parameters were set in accordance with standard recommendations: a flying altitude of 90 m, a speed limit of 15 m/s, a flight direction of 126°, and overlaps of 75% fore and 65% side. With the stated specifications, it takes 8 to 15 minutes to cover 15 hectares, which means that the total area could be covered in less than 5 hours of continuous flight. However, continuous flight was not possible due to battery limitations, intervisibility and other logistics. During the survey, the UAV was monitored to ensure that it remained within the range of visual contact, as any other aerial object must be avoided, as noted by [7]. Changing weather conditions were also monitored to ensure quality data acquisition, since strong winds adversely affect the operation of UAVs. In summary, once the UAV had passed its first waypoint (flight path), the next waypoint was initiated. Subsequently, the UAV acquired data following the pre-programmed flight paths. After passing the last waypoint on the flight lines, the UAV terminated its flight plan and initiated its landing sequence. Figure 4 presents the 2015 land cover base map of the study area showing some prominent locations. Figure 5 shows the UAV landing/taking off at two locations within the study area.

Data Processing

After the completion of the static GNSS survey, the GNSS receivers were taken to the laboratory for data download and post-processing. The receivers were connected to the computer and the data were downloaded. Trimble Business Centre v3.5 was launched and the project settings, including the definition of the coordinate system/datum as WGS84 UTM Zone 31N, were set. The downloaded data were then imported into the software environment and the coordinates of the base station were entered. The baseline processing was carried out, and the report of the post-processed coordinates was generated and saved as a comma-separated values (CSV) file. Figure 6 shows the final plot of the GCPs after the baseline processing. The raw photos captured during the UAV flights were downloaded and processed using Pix4D Mapper software. The key stages of the processing workflow adopted in Pix4D Mapper are already well documented in [20].
Essentially, the processing involved initial processing, which handles the image alignment; geo-rectification with GCPs using Pix4D's rayCloud editor, which enables the precise position of each GCP to be verified using the original 2D images; the processing of the point cloud and mesh; and the generation of the orthophoto and Digital Terrain Model (DTM). After the processing was completed, the orthophoto that was generated was imported into ArcMap software (Fig. 7), where three main land cover features (buildings, water bodies and vegetation) were vectorised on the orthophoto and displayed in the form of a map.

Accuracy Assessment

In this study, a point-to-point validation method was used. This technique is based on data points that compare two point clouds directly, as identified by [21,22]. The horizontal and vertical accuracy was validated using the root mean square error (RMSE) and the standard deviation (SD) of the coordinates of the GCPs compared with their positions on the image, in line with the literature. The formula for the RMSE is as follows:

$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (N_i - N_j)^2}$ (1)

where $N_i$ are the observed values, $N_j$ the corresponding reference values, and $n$ the number of points. The horizontal RMSE ($\mathrm{RMSE}_r$) is given as:

$\mathrm{RMSE}_r = \sqrt{\mathrm{RMSE}_X^2 + \mathrm{RMSE}_Y^2}$ (2)

Equation (2) can also be expressed as Equation (3) below:

$\mathrm{RMSE}_r = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left[ (x_{\mathrm{data},i} - x_{\mathrm{check},i})^2 + (y_{\mathrm{data},i} - y_{\mathrm{check},i})^2 \right]}$ (3)

According to the National Standard for Spatial Data Accuracy [23], the method for evaluating horizontal accuracy is classified under two conditions. The first condition is that if $\mathrm{RMSE}_X = \mathrm{RMSE}_Y$, then:

Horizontal Accuracy = 1.7308 · $\mathrm{RMSE}_r$ (4)

The second condition is that if $\mathrm{RMSE}_X \ne \mathrm{RMSE}_Y$ and $\mathrm{RMSE}_{\min}/\mathrm{RMSE}_{\max}$ is between 0.6 and 1.0 (where $\mathrm{RMSE}_{\min}$ is the smaller of $\mathrm{RMSE}_X$ and $\mathrm{RMSE}_Y$ and $\mathrm{RMSE}_{\max}$ the larger), the circular standard error (at 39.35% confidence) may be approximated as 0.5 · ($\mathrm{RMSE}_X$ + $\mathrm{RMSE}_Y$) [23,24]. If the error is normally distributed and independent in each of the x- and y-components, the accuracy value according to [23] may be approximated by the following formula:

Horizontal Accuracy = 2.4477 · 0.5 · ($\mathrm{RMSE}_X$ + $\mathrm{RMSE}_Y$) (5)

The vertical accuracy at the 95% confidence level is calculated as:

Vertical Accuracy = 1.9600 · $\mathrm{RMSE}_Z$

Some of the structures greater than 24.7 m in height include the Senate Building and the high-rise residential quarters. The height variation can aid decision making, such as, in this study, the determination of the flying height, and the planning of surface utilities by the University of Lagos Works Department. An attribute query showed that 166 of 599 buildings (27.71%) are higher than 15 m above mean sea level (m.s.l.). This indicates that most of the structures within the university are at a low altitude. Forty-two structures are greater than or equal to 1,000 m2 in area. This shows that only a small fraction of the larger university community can be accommodated as residents on campus. However, the university can maximise its limited land resource by building taller structures rather than smaller ones, to address the current challenges of insufficient staff and student accommodation. For a better understanding of the composite map, Table 2 shows the percentage of changes that occurred within the study period (2015 to 2019). From the table, the statistics showed losses of 15,900 m2 and 33,167 m2 in the areas of water body and vegetation respectively. This translates to losses of 3,975.0 m2/yr and 8,291.8 m2/yr in the water body and vegetation respectively. If this rate is projected forward, it can be deduced that the loss is significant considering the short duration.
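The per-year figures quoted here, and the corresponding gains discussed in the next paragraph, are simple area-change over period ratios; a short check (values taken from Table 2 as quoted in the text):

```python
# Change detection window: 2015-2019, i.e. 4 years.
period_yr = 4
changes_m2 = {
    "water body": -15_900.0,          # loss
    "vegetation": -33_167.0,          # loss
    "buildings": 16_306.7,            # gain
    "open ground/grasses": 32_760.4,  # gain
}
for cover, d_area in changes_m2.items():
    print(f"{cover:>20s}: {d_area / period_yr:+10.1f} m2/yr")
# -> -3975.0, -8291.8, +4076.7 and +8190.1 m2/yr, matching the quoted rates
```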
The loss is attributed to the gain in buildings and bare land (open ground/grasses), as seen from the table. Within the same 4-year period, there was a 16,306.7 m2 (4,076.7 m2/yr) increase in the area of buildings and a 32,760.4 m2 (8,190.1 m2/yr) increase in open ground/grasses. If the current rate of depletion of vegetation and water bodies is not controlled, it might have a negative impact on the university community in the near future. Accordingly, proper planning and consultation are essential in the allocation of spaces for developmental purposes, in order to strike a balance between anthropogenic interventions and ecosystem/biodiversity sustainability.

Validation of Results

The validation dataset was based on the static post-processed GNSS-derived coordinates of the GCPs. Table 3 shows the point-to-point geolocation details using ten and seven checkpoints, indicated in the table as RMSE10 and RMSE7 respectively. The validation with ten checkpoints showed a lower accuracy, whereas the validation with 7 checkpoints yielded a higher accuracy (lower RMSEs) in the X, Y and Z coordinates. The values for RMSE10 are 0.978 m, 0.692 m and 2.293 m, compared with RMSE7 values of 0.161 m, 0.206 m and 0.080 m in the X, Y and Z coordinates respectively. While it is expected that the more control points used, the better the result, as observed in [20], in this case the reverse was true. This is due to the large error contained in three of the control points, particularly in the X and Y coordinates. For example, GME 04 and GME 06 produced errors that are significantly larger than the expected and allowable error of 0.020 m. Therefore, the outlying points were excluded from the final analysis to arrive at a more reasonable result. The resultant poor quality of the excluded points is attributed to variable factors such as the influence of weather. Accordingly, good climatic conditions devoid of strong wind are considered an important factor that must be taken into account in the application of UAVs for 3D mapping. In summary, the horizontal and vertical accuracies obtained are 0.183 m and 0.157 m respectively, following the application of the second condition without the assumption of normality and independence in the data distribution stated in Section 2.5. Hence, this result suggests a high level of accuracy both in the horizontal and the vertical positions on the orthophoto. This accuracy is considerably adequate for 3D mapping and other fit-for-purpose applications, such as earth-volume determination in engineering works and incident monitoring and management.

Discussion of Results

Section 3.2 showed that the data acquired using the UAV technique have a considerable degree of accuracy both in planimetry and in height, and as a result are adequate for many fit-for-purpose applications. For example, the acquired images can provide useful information for different applications such as engineering and environmental modelling and monitoring, and emergency assessment. In particular, the orthophotos can be used for mapping, volume computation, displacement analyses, erosion and flood management, disaster management in oil and gas, and incident analysis. The DTM generated from the UAV data acquisition allows quick multi-temporal volume estimations, without the occlusion problems that can be encountered using terrestrial techniques. The elevation data obtained can be used for cut-and-fill calculations and the development of new structures.
For example, the contour plan can be used for engineering design in the university to aid the sand filling of swampy areas (land reclamation), the construction of bridges and the design of a good drainage system. In addition, the backend database will serve as a tool enabling queries to be performed to help in making informed decisions on physical developments, including rapid assessment of changes in land use/land cover and new project sites.

Conclusions

This study, conducted at the University of Lagos, has shown that unmanned aerial vehicles (UAVs) have the ability to bridge the large gap between terrestrial and aerial methods of mapping and data acquisition. The acquired and processed data from the flight mission can be utilised for different purposes, such as change detection analysis for land use/land cover, terrain modelling, and infrastructural planning and monitoring. As would be expected, the results showed that the terrain of the University of Lagos is not flat but rather follows a non-uniform undulation, as vividly captured in the generated contour map. This contour map is a good asset to the university community for future engineering planning and design. The composite map identified numerous artificial and natural ground features, such as roads, buildings, water bodies and vegetation. This map is vital for understanding variations in land use and land development. In the 4-year period between 2015 and 2019, the change detection statistics showed losses of 15,900 m2 (3,975 m2/yr) and 33,167 m2 (8,291.8 m2/yr) in the areas of water body and vegetation respectively. There was a 16,306.7 m2 (4,076.7 m2/yr) increase in the area of buildings and a 32,760.4 m2 (8,190.1 m2/yr) increase in open ground/grasses within the same period. The losses in water body and vegetation are attributed to the gain in buildings and bare land (open ground/grasses). While few control points were used for validation, the results obtained showed that UAV 3D mapping is applicable for the acquisition of high-resolution data at low cost. Accordingly, it is recommended that this technology be considered as a fast, safe and cost-effective method of mapping the university's landed properties. Also, the topographical and base maps of the institution should be updated at regular intervals using this technology, so that an up-to-date map is always available. In the process of erecting new structures, the authority concerned should, as far as possible, maintain a balance in the ecosystem rather than destroy it. The university authorities should also take the bold step of sand-filling part of the inland waterlogged area on campus, which would enable them to reclaim such land for future development. Finally, the results of this study should be embraced by the university authorities for the future planning and design of engineering projects within the campus.
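As a compact companion to the accuracy-assessment procedure of equations (1)-(5), the following Python sketch (function names are ours) implements the RMSE and NSSDA computations. Fed with the 7-checkpoint RMSEs reported above (0.161 m, 0.206 m and 0.080 m), nssda_vertical reproduces the 0.157 m figure, while the reported 0.183 m horizontal value corresponds to the circular-standard-error term 0.5·(RMSE_X + RMSE_Y) without the 2.4477 normality factor, exactly as stated in the validation section.

```python
import numpy as np

def rmse(observed, reference):
    """Equation (1): root mean square error of observed vs reference values."""
    observed = np.asarray(observed, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((observed - reference) ** 2)))

def nssda_horizontal(rmse_x, rmse_y):
    """NSSDA horizontal accuracy at 95% confidence [23]."""
    if np.isclose(rmse_x, rmse_y):
        return 1.7308 * np.hypot(rmse_x, rmse_y)   # first condition, eq. (4)
    ratio = min(rmse_x, rmse_y) / max(rmse_x, rmse_y)
    if 0.6 <= ratio <= 1.0:
        # circular standard error scaled for normal, independent errors, eq. (5)
        return 2.4477 * 0.5 * (rmse_x + rmse_y)
    raise ValueError("RMSE_min/RMSE_max outside the 0.6-1.0 range")

def nssda_vertical(rmse_z):
    """NSSDA vertical accuracy at 95% confidence: 1.9600 * RMSE_Z."""
    return 1.9600 * rmse_z

rmse_x, rmse_y, rmse_z = 0.161, 0.206, 0.080   # 7-checkpoint values
print(0.5 * (rmse_x + rmse_y))                 # 0.1835 -> reported 0.183 m
print(nssda_vertical(rmse_z))                  # 0.1568 -> reported 0.157 m
print(nssda_horizontal(rmse_x, rmse_y))        # ~0.449 m with normality factor
```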
4,615.8
2021-01-01T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Visible Wavelength Astro-Comb

We demonstrate a tunable laser frequency comb operating near 420 nm with a mode spacing of 20-50 GHz, a usable bandwidth of 15 nm and an output power per line of ~20 nW. Using the TRES spectrograph at the Fred Lawrence Whipple Observatory, we characterize this system to an accuracy below 1 m/s, suitable for calibrating high-resolution astrophysical spectrographs used, e.g., in exoplanet studies.

Introduction

A successful method for identifying planets in other star systems (exoplanets) is the radial velocity (RV) technique, which exploits small, periodic Doppler shifts in the spectrum of a target star to infer the existence of orbiting planets and to determine characteristics such as the orbital period and a lower limit to the planet's mass. Utilizing the RV method to find small, rocky exoplanets similar to the Earth in the habitable zone requires wavelength calibration precision and stability of ~10 cm/s for the astrophysical spectrograph [1] used to measure the stellar spectrum. Because frequency combs provide a broad spectrum of highly stable and precisely known optical frequencies [2], which can be traced to a common reference such as GPS, they are an ideal wavelength calibration tool for astrophysical spectrographs. Spectral filtering of the frequency comb using a Fabry-Perot cavity [3,4] is generally required for the spectrograph to resolve individual comb lines, though work is progressing to develop frequency combs with repetition rates high enough to eliminate the need for filter cavities [5][6][7]. Measurements of time variation of the fundamental constants or of the expansion of the universe [8,9] may also be enabled by these types of calibration systems, with expansion-of-the-universe measurements requiring ~1 cm/s sensitivity over decadal timescales. In this work, we focus on calibrators for the near-term goal of exoplanet detection. Frequency comb systems optimized for astrophysical spectrograph calibration ("astro-combs") [10][11][12][13] have been demonstrated in several wavelength regions to date [11][12][13][14], though recently there is a trend toward systems operating at the short-wavelength end of the emission spectrum of Sun-like stars (400-700 nm) [10,14]. In addition to providing the largest photon flux, this wavelength region is rich with high-quality spectral features most suitable for use with the RV method [15]. Shorter wavelengths also avoid fringing effects in the spectrograph's charge-coupled device (CCD) caused by weak etaloning in the silicon substrate. This effect can be a serious complication for data analysis at wavelengths longer than 700 nm, since correction of these fringes to a precision better than 1% is challenging. Here, we demonstrate operation of an astro-comb near 420 nm with a FWHM bandwidth of about 15 nm, a center wavelength tunable over 20 nm, and a spectral line spacing of 22 and 51 GHz. Systematics of this calibration system, including dispersion, higher-order transverse modes of the cavity filter, and alignment of the frequency comb and cavity filter transmission resonances, have been characterized. Using a scanned-cavity technique [16], the accuracy of the astro-comb system has been compared to that of the underlying laser frequency comb.
This type of broadband characterization of an astro-comb system is necessary to ensure that the optical filtering process has not left unevenly suppressed source comb lines on either side of each astro-comb line. Identifying where uneven suppression has occurred allows use of a much wider bandwidth than would normally be possible if only exactly even suppression could be tolerated.

Fig. 1. Description of an astro-comb in the frequency domain. (a) Frequency-doubled source comb lines and an analytic description of the optical spectrum in terms of the repetition rate, f_rep, and the carrier-envelope offset frequency, f_CEO, of the underlying Ti:Sapphire laser. The frequency-doubled spectrum is multiplied by the transmission function of the Fabry-Perot cavity, filtering the source comb spectrum, which becomes the astro-comb spectrum shown in graph (b). The free spectral range of the transmission resonances of the filter cavity, f_filter, is ideally an integer multiple of the repetition rate of the source laser. However, due to dispersion from air and mirror reflections, f_filter will generally be a function of frequency. In this example the integer m = 4 (typically m = 20-50 with our 1 GHz repetition rate laser), such that in (b) every 4th line (referred to as an astro-comb line) is fully transmitted. Because the spectrograph resolution (≳15 GHz) is generally larger than the repetition rate of the source comb, finite suppression of the source comb modes closest to every astro-comb line results in a power-weighted average frequency, f_R, being reported by the spectrograph. The difference ∆f must be recovered to realize the full accuracy of the astro-comb.

Calibration sensitivity

Astrophysical spectrographs used for high-precision spectroscopy are generally configured as cross-dispersion spectrographs, which disperse the input spectrum in two orthogonal directions to increase resolution and optical bandwidth. Focusing this cross-dispersed beam on a CCD creates a two-dimensional image that transforms the frequency (or wavelength) of the input light to a physical position on the CCD. The ideal calibrator for such a spectrograph would be a single light source whose spectrum is composed of an ensemble of individual incoherent emitters, each with equal power and separated in frequency by slightly more than the spectrograph resolution [1]. Each emitter should also have a spectral width much less than the spectrograph resolution. Knowing the frequency of each component of this calibration spectrum allows recovery of the spectrograph's mapping of frequency (or wavelength) to CCD position, as well as the point spread function of the spectrograph, for all frequencies. Frequency combs based on mode-locked lasers are currently the closest analogue to this ideal calibrator, with the exceptions that their optical bandwidth is typically much narrower than the spectrometer's bandwidth and that the frequency spacing between spectral features (comb lines) is much less than the resolution of the spectrometer. The bandwidth limitation can be partially overcome by broadening the spectrum from the frequency comb in a nonlinear fiber or crystal and/or shifting the spectrum to a different wavelength range using a similar process. A Fabry-Perot cavity can be used to selectively remove all but every m-th comb line in the output spectrum of the frequency comb, generating a new comb whose lines are resolvable by the spectrograph (Fig. 1).
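To make the filtering arithmetic of Fig. 1 concrete, the sketch below enumerates a few astro-comb line frequencies; the lock values f_rep and f_ceo and the choice of m are illustrative assumptions, not the settings used at the observatory. After frequency doubling, the comb lines obey f = 2·f_ceo + n·f_rep, and the filter cavity keeps every m-th line:

```python
c = 299_792_458.0      # m/s
f_rep = 1.0e9          # source comb repetition rate, Hz (1 GHz laser)
f_ceo = 0.25e9         # carrier-envelope offset frequency, Hz (assumed value)
m = 51                 # filter cavity FSR / f_rep -> 51 GHz astro-comb spacing

# Index of the doubled comb line nearest 420 nm (f ~ 714 THz):
n_center = round((c / 420e-9 - 2 * f_ceo) / f_rep)

# A few neighbouring astro-comb lines after Fabry-Perot filtering:
for k in range(-2, 3):
    f = 2 * f_ceo + (n_center + m * k) * f_rep
    print(f"{f / 1e12:.6f} THz  ({c / f * 1e9:.4f} nm)")
```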
The shot-noise-limited velocity uncertainty of a radial velocity measurement using the Doppler shift of a single spectral feature has previously been discussed [1,17]; it can be written as

$\sigma_v \approx A \, \lambda \, \delta\nu_{1/2} / (\mathrm{SNR} \, \sqrt{n})$, (1.1)

where A is a number close to one related to the specific line shape of the feature in question, δν_1/2 is the FWHM of the feature, SNR is the peak signal-to-noise ratio due to shot noise for the feature, usually measured at peak center, n is the number of spectrograph CCD pixels sampling the spectral feature in the grating dispersion direction, and λ is the center wavelength of the feature. Using all N emission features of a target star, or all the lines from an astro-comb, the value given by (1.1) may be reduced by 1/N^(1/2). By matching the frequency spacing of the astro-comb lines to a value just larger than the resolution of the spectrograph (α ≈ 3) [1], and defining N as the ratio between the FWHM frequency bandwidth of the calibration spectrum δν_input and the astro-comb repetition rate, we can rewrite (1.1) in terms of the spectrograph resolving power R = λ/δλ, giving (1.2). In addition to the benefits achieved by shifting the center wavelength of the RV measurement to shorter wavelengths, as stated in the introduction, there is an additional precision benefit of ~1/√2 gained from frequency doubling the Ti:Sapphire laser. However, large increases in the bandwidth of the calibration source are necessary to see significant gains in calibration precision for a given spectrograph with resolving power R. The sensitivity gain between calibrating 15 nm and calibrating the full octave-spanning spectrum of the Ti:Sapphire laser (>300 nm) is a factor of ~4.5. Achieving comb line filtering over 300 nm, however, presents far greater challenges than those addressed here [3,13,18].

Visible frequency comb generation

Our visible wavelength astro-comb (Fig. 2) follows the general approach outlined above by using a frequency-doubled, octave-spanning Ti:Sapphire laser, which is spatially and spectrally filtered before its spectrum is fiber-coupled to the astrophysical spectrograph for calibration of the spectrograph's frequency (or wavelength) to CCD position mapping. Currently, nonlinear frequency conversion is the only method for generating a frequency comb in the 400-600 nm wavelength region, as no broadband laser materials in this region are known. We have chosen frequency doubling of a broadband Ti:Sapphire laser (Fig. 3) to provide a source comb at 400 nm, because we have previously [19] constructed stable, high-power Ti:Sapphire lasers that are capable of operating unattended on day-long timescales. Absolute referencing of such lasers is also more direct than with other systems, as Ti:Sapphire lasers can generate optical spectra directly from the laser cavity that cover more than one octave; this allows direct access to the carrier-envelope offset frequency, f_ceo, control of which is required if all optical frequencies from the frequency comb are to be known. Optical frequencies of comb lines from femtosecond lasers are uniquely described by f = f_ceo + n·f_rep, where active control of both f_ceo and f_rep allows unique definition of the absolute frequency of each comb line. In the system depicted in Fig. 2, f_rep is detected using a PIN photodiode to recover the pulse repetition rate, which is then phase-locked to a low-noise radio frequency oscillator using a low-bandwidth feedback loop controlling the length of the laser cavity. Control of the f_ceo frequency is achieved by modulation of the pump laser intensity using an AOM.
A phase-locked loop that generates the control signal for the AOM matches the f_ceo frequency detected using the f-2f method to a second low-noise frequency synthesizer. Both synthesizers are phase-locked to a commercial rubidium frequency reference, giving the entire chain a fractional frequency stability <10^-11 over timescales from seconds to days. Heterodyne measurements with a narrowband diode laser at 408 nm reveal a maximum linewidth of the optical comb lines of <5 MHz, limited by the linewidth of the diode laser. The astro-comb system must be very robust and reliable over many hours for use in an observatory environment. We have therefore chosen to use a 1 mm BBO crystal for frequency conversion rather than a photonic crystal fiber. Using 300 mW from the Ti:Sapphire laser as input to the BBO crystal, 18 mW average power and a 15 nm FWHM spectrum centered at 420 nm are typically generated. Rotation of the BBO crystal in the laser focus allows tuning of the generated spectrum over a FWHM bandwidth of nearly 50 nm, as measured directly after the BBO crystal. For reasons explained below, the tuning bandwidth is reduced to approximately 20 nm after transmission through the Fabry-Perot cavity (Fig. 4(A)). For optimal spectral filtering of the source comb, it is necessary to ensure that the intensity distribution of the beam sent to the Fabry-Perot cavity is well described by a fundamental-order Hermite-Gaussian mode. Due to both the spatial chirp of the Ti:Sapphire laser output and the spatial walk-off that occurs during second harmonic generation in the BBO crystal, the beam at the output of the BBO crystal contains many higher-order spatial terms. A single-mode fiber (mode field diameter ~2.9 µm) placed in the Fourier plane of a 4F lens system leads to a transverse intensity distribution at the output of the second lens that is approximately a lowest-order Gaussian. While using a single-mode fiber as a spatial filter reduces the power available after the doubling process by a factor of almost 5, the spatial profile of the output beam becomes constant, and only the spectral content and transmitted fraction can be affected by changes in beam pointing from the laser.

Fig. 4. Round-trip dispersion due to reflection from the Fabry-Perot cavity mirrors (right axis), and cavity transmission for an input spectrum centered at 420 nm (left axis), taking into account air dispersion as well as mirror dispersion and reflectivity.

Spectral filtering

The highest calibration precision occurs when the spacing between spectral features in the calibration spectrum is approximately three times the resolution of the spectrograph [1], ensuring high-contrast coverage of the spectrograph's CCD. Spectral filtering of the source comb is currently necessary to achieve this ideal spacing, since the frequency spacing between comb lines of all current mode-locked lasers suitable for spectrograph calibration is smaller than the resolution of most existing astrophysical spectrographs. The most straightforward way to achieve this filtering is to use a two-mirror Fabry-Perot cavity in which the distance between the mirrors, L, is set such that the free spectral range of the filter cavity, c/2L, is an integer multiple of the source comb repetition rate (Fig. 1). Such a filter transmits only every m-th source comb line, suppressing the remaining source comb lines to varying degrees and resulting in a transmitted spectrum that more closely approximates the ideal.
Dispersion from air or mirror reflections causes a deviation from this integer spacing, resulting in a change in the amplitude of source comb line transmission. This difference in transmitted amplitude is generally small and manifests primarily as an asymmetry in the suppression of the two source comb lines on either side of the m-th transmitted source comb line (the astro-comb line), causing an apparent shift in the overall line center recovered by the spectrograph. This shift occurs because the spectrograph's limited resolution causes all three source comb lines to be recovered as a power-weighted sum rather than as individual lines. The shift in frequency of the recovered center of gravity caused by asymmetrical line suppression can be written as

∆f = (P_{n+1} − P_{n−1}) f_rep / (P_{n−1} + P_n + P_{n+1}),

which is a truncated version of the formula described in Fig. 1. A recently developed scanned-cavity technique [16] can be used to identify and remove these systematic shifts from the final calibration, which is critical since any dielectric mirror coating will contain both small errors and a systematic divergence of the dispersion at the edges of its working range. In this work, plane-parallel mirrors are used to construct the Fabry-Perot cavity filter to reduce distortion of the astro-comb caused by transmission of source comb light through the filter cavity as a result of excitation of higher-order transverse modes (HOM) at frequencies far from the longitudinal, c/2L, resonances. The reduction in transmission via HOM is a result of three factors. First, the frequency offset of the higher-order transverse modes is typically <100 MHz for plane-parallel cavities. Second, due to the high source comb repetition rate of 1 GHz, transmission of a source comb line adjacent to a desired astro-comb line can only occur via transverse modes with very high mode order (10 or more). Third, the difference in nominal transverse mode size between the TEM(0,0) mode and higher-order transverse modes prevents significant coupling into higher-order transverse modes once the coupling is optimized for the TEM(0,0) mode. The combination of these three factors reduces side-mode transmission via higher-order transverse modes to a level similar to what would be expected from a hypothetical cavity possessing only the longitudinal c/2L modes. To verify this chain of arguments experimentally, Fig. 5(A) shows the measured transmission of the Fabry-Perot filter cavity using a swept-frequency laser diode. Asymmetry of the measured transmission, due to transmission via higher-order transverse modes, is clearly visible on the high-frequency side of the transmission fringe in Fig. 5(A). However, we emphasize that the benefit of this approach is the elimination of transmission maxima in a similar sweep spanning the full 22 GHz free spectral range (Fig. 5(B)). The broadening of the main transmission fringe in Fig. 5(A) occurs mainly because the flatness deviations are not spherical, though this type of asymmetry can be compensated using the technique described in [16]. Misalignment in the frequency domain between the Fabry-Perot cavity filter's transmission resonances and the source comb lines, caused by air and mirror dispersion, is also a limiting factor for the bandwidth of the source comb that can be successfully filtered.
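The three-line center-of-gravity formula above is easy to exercise numerically; in this sketch the neighbour suppression levels are invented purely for illustration:

```python
def cog_shift(p_minus, p_center, p_plus, f_rep):
    """Apparent line-center shift when the spectrograph blends an astro-comb
    line with its two imperfectly suppressed neighbours (formula above)."""
    return (p_plus - p_minus) * f_rep / (p_minus + p_center + p_plus)

# Hypothetical example: neighbours suppressed to 1.0% and 0.5% of the main
# line, with a 1 GHz source comb spacing.
df = cog_shift(0.010, 1.0, 0.005, 1.0e9)
print(f"shift = {df / 1e6:.2f} MHz")                 # ~ -4.93 MHz

# Expressed as a radial velocity at 420 nm (f ~ 714 THz): dv = c * df / f
c = 299_792_458.0
print(f"RV bias ~ {c * df / (c / 420e-9):.2f} m/s")  # ~ -2 m/s scale systematic
```

Such a shift would dwarf a 10 cm/s calibration goal, which is why the scanned-cavity characterization described above is essential.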
The mirrors used here are dielectric Bragg reflectors with slightly modified layer thicknesses, designed using the numerical routine described in [20]. The layer thicknesses are optimized for minimum accumulated phase on reflection over most of the high-reflectivity bandwidth of the initial Bragg layer stack. The resulting phase on reflection as a function of wavelength is plotted in Fig. 4(B), with the portion due to a constant group delay removed. Due to the fast variation of the round-trip phase, caused mainly by air dispersion [13] at the edges of the reflectivity bandwidth, the bandwidth over which the input comb lines and the Fabry-Perot cavity's transmission resonances are well aligned is limited, reducing the overall transmission and resulting in an astro-comb bandwidth of ~20 nm (Fig. 4(A)).

Astro-comb operation

Initially, alignment of the comb lines from the source comb with the transmission resonances of the Fabry-Perot cavity filter is achieved by maximizing the transmission of the source comb through a slow change of the filter cavity length. Once the cavity is set to the nominally correct length, the cavity length is modulated by ~3 pm at a rate of 100 kHz, causing modulation of the power transmission of the cavity. The detected transmission is demodulated using a lock-in amplifier, and the resulting signal is used to control the length of the cavity via a second piezo-mounted mirror. Two piezos are used so that one can be optimized for generating a high-frequency modulation, allowing a higher feedback bandwidth. The second piezo can then be of slightly larger physical size, enabling larger deflection and thus correction of larger-amplitude cavity length errors, at the cost of much larger piezo capacitance and longer response time. For calibration of an astrophysical spectrograph, it is not strictly necessary for the absolute frequency of each astro-comb line to be known before the astro-comb light is sent to the spectrograph. The astro-comb spectrum as measured on the spectrograph can be compared to the spectrum from an iodine cell or a thorium-argon lamp, the emission lines of which are well tabulated and known to better than 1 GHz. Once one line of the filtered astro-comb is identified by this method, all the other lines in that spectrograph order can be identified, and the full accuracy of each individual astro-comb line can be exercised. In our previous near-infrared (NIR) astro-comb studies [11], a single fiducial reference laser was used as a frequency marker to identify the frequency of all astro-comb lines recorded by the spectrograph. However, because the frequency difference between spectrograph orders is not constant, and for some orders the dispersed spectrum extends beyond the collection area of the CCD, the loss of information between orders can prevent indexing of astro-comb lines from one order to the next. Thus, in general, a fiducial wavelength marker is required for each order of the spectrograph, which an iodine cell or thorium-argon lamp can easily provide.

Astro-comb system characterization

We have installed a second astro-comb system, nearly identical to the one described above, at the Fred Lawrence Whipple Observatory (FLWO). Using a Menlo Systems 1 GHz octave-spanning Ti:Sapphire laser as the source comb, together with a BBO crystal, Fabry-Perot filter cavity and filter-cavity stabilization scheme identical to those described above, we are typically able to generate an average output power from the astro-comb of 10-30 µW (~10-30 nW per astro-comb line) with a line spacing of 51 GHz. The TRES spectrograph [21] has a resolving power of R = λ/δλ = 40,000 and spectral coverage from 390 to 900 nm.
A 100 µm multimode step-index fiber couples light from the calibration system, where the astro-comb light is injected, to the spectrograph optical bench, where it is dispersed by an echelle grating and a prism to produce a two-dimensional spectrum consisting of 51 orders of approximately 10 nm each. The dispersed light is then re-imaged onto a two-dimensional CCD array with a resolution of ~0.01 nm, sampled by ~6 pixels in the dispersion direction. The CCD (E2V 42-90) is read out with an added noise of less than 3 counts per pixel. Spectra are arranged as one-dimensional data through the use of a halogen flat-fielding lamp, which maps the spectrograph orders and compensates for pixel-to-pixel gain variations. Approximately six pixels in the cross-dispersion direction, orthogonal to the grating dispersion direction, contained counts from each order; these were added together to produce one-dimensional spectra. In the data presented in Fig. 6, the wavelength calibration was provided by a standard thorium-argon calibration lamp also attached to the spectrograph calibration system. For exposure times compatible with the spectrograph control software (~30 s), the output of the astro-comb was attenuated by 20 dB to avoid saturating the CCD.

Fig. 6. Extracted one-dimensional spectrum of the full astro-comb spectrum. The FWHM of the spectrum is approximately 10 nm, with approximately 50,000 peak counts in 10 seconds on the spectrograph. Inset: example of filtered comb lines with a mode spacing of 51 GHz. The diode reference laser is used for characterization of the filter cavity transmission profile described in the text.

To achieve the ~10 cm/s precision required for the detection and characterization of Earth-like exoplanets over several-year observation times, an in situ technique may be used to characterize the cavity filter [16] and determine the apparent shift in the center of gravity of each astro-comb line from its nominal position caused by cavity dispersion and transmission resonance asymmetry, and thus recover accurate astro-comb calibrations. Briefly, this technique uses a diode laser phase-locked to the stabilized source comb, to which the cavity filter is then locked using the Pound-Drever-Hall technique. After initially aligning the comb lines from the source comb and the transmission resonances of the cavity filter, the offset frequency between the diode laser and the frequency comb is changed in discrete steps and the spectrum transmitted through the cavity is recorded. This type of broadband characterization of an astro-comb system is necessary to ensure the astro-comb spectrum is fully understood prior to its use as a calibrator. We have employed this calibration technique with the system installed at the FLWO (Fig. 7). From these data we find good agreement between the measured group delay dispersion (GDD) and that expected from the mirrors (Fig. 3(B)), along with few-mrad oscillations in the recovered offset due to weak etalon effects between the mirror substrates. Because this etalon effect can be straightforwardly circumvented by using wedged substrates for the cavity filter mirrors, we have chosen to low-pass filter the offset data in Fig. 7. A four-point binomial filter with a cut-off frequency of 0.1 nm^-1 was chosen for removal of the high-frequency oscillations, to reveal the true potential of the system. Incorporation of the calibration data into astronomical observations will also require a robust model of the cavity transmission asymmetry, which was not accounted for in Fig. 7,
Fig. 7. Offset of recovered astro-comb line center caused by misalignment of the filter cavity transmission maxima and the source comb lines, as measured by the TRES spectrograph (black circles). Error bars at ±15 cm/s on the recovered offset correspond to the uncertainty of the laser diode frequency used to lock the filter cavity and astro-comb (blue dotted lines). Breaks in the data set at 417 nm and again at 422 nm correspond to transitions between spectrograph orders as recovered from the CCD. The glitch at 408 nm is caused by amplitude-to-phase conversion from slight saturation of the CCD by the laser diode. Increased fluctuation in the trace above 424 nm and below 402 nm is due to the reduced signal-to-noise ratio at the edges of the astro-comb spectrum. The offset of recovered astro-comb lines has been low-pass filtered to remove oscillations caused by mild etalon effects from the mirror substrates; see text above.

Straightforward addition of this solution to the wavelength-to-pixel mapping for the spectrograph will allow sub-meter-per-second calibration accuracy of the TRES spectrograph, to be reported in an upcoming publication. An estimate of the lower bound for the calibration precision can be made using (1.2), which gives a precision of ~30 cm/s for the signal-to-noise ratio of 200 observed in fits to comb spectra.

Conclusion

Development of visible-wavelength frequency combs with wide mode spacing will have broad applicability as high-accuracy and high-stability wavelength calibrators for astrophysical spectrographs. The absence of broadband laser materials in the 400-600 nm range leaves nonlinear frequency conversion as the means to realize such wavelength calibrators. We frequency-doubled a Ti:Sapphire laser frequency comb to provide a tunable visible-wavelength astro-comb operating near 420 nm, with a spectral line spacing variable over tens of GHz (up to 51 GHz in the present demonstration), a usable spectrum of 15 nm, and an output power of up to 20 nW per line. Systematic shifts in the resulting astro-comb will not limit the final calibration accuracy [16]. Using astro-combs as calibration sources, instability in spectrographs used for radial-velocity spectroscopy, such as the TRES spectrograph at the Fred Lawrence Whipple Observatory, can be reduced to below the 1 m/s level to enable high-resolution exoplanet studies. In principle, this approach could be adapted to other wavelengths within the Ti:Sapphire laser's frequency-doubled spectrum, potentially covering the wavelength range 370-550 nm, depending on the properties of the spectrograph in question.
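Eq. (1.2) is not reproduced in this excerpt, but the order of magnitude of the precision estimate quoted above can be illustrated with a generic photon-noise scaling; the line count and per-line width below are assumptions, not values from the paper.

```python
# Generic photon-noise scaling for a comb calibrator: single-line centroid
# precision ~ (resolved line width)/SNR, averaging down as sqrt(N_lines).
c = 2.998e8                      # speed of light, m/s
snr = 200.0                      # per-line signal-to-noise ratio from the text
n_lines = 400                    # assumed number of usable astro-comb lines
R = 40_000                       # TRES resolving power, from the text
line_width = c / R               # resolved width in velocity units, ~7.5 km/s

sigma_v = line_width / snr / n_lines ** 0.5
# ~1.9 m/s with these assumed inputs; the ~30 cm/s quoted above follows from
# the paper's Eq. (1.2) with its actual parameters.
print(f"estimated calibration precision: {sigma_v:.2f} m/s")
```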
Crovirin, a Snake Venom Cysteine-Rich Secretory Protein (CRISP) with Promising Activity against Trypanosomes and Leishmania

Background

The neglected human diseases caused by trypanosomatids are currently treated with toxic therapy of limited efficacy. In search of novel anti-trypanosomatid agents, we showed previously that the Crotalus viridis viridis (Cvv) snake venom was active against infective forms of Trypanosoma cruzi. Here, we describe the purification of crovirin, a cysteine-rich secretory protein (CRISP) from Cvv venom with promising activity against trypanosomes and Leishmania.

Methodology/Principal Findings

Crude venom extract was loaded onto a reverse-phase analytical (C8) column using a high-performance liquid chromatograph. A linear gradient of water/acetonitrile with 0.1% trifluoroacetic acid was used. The peak containing the isolated protein (confirmed by SDS-PAGE and mass spectrometry) was collected and its protein content was measured. T. cruzi trypomastigotes and amastigotes, L. amazonensis promastigotes and amastigotes, and T. brucei rhodesiense procyclic and bloodstream trypomastigotes were challenged with crovirin, whose toxicity was tested against LLC-MK2 cells, peritoneal macrophages and isolated murine extensor digitorum longus muscle. We purified a single protein from Cvv venom corresponding, according to nano LC-MS/MS sequencing, to a CRISP of 24,893.64 Da, henceforth referred to as crovirin. Human-infective trypanosomatid forms, including intracellular amastigotes, were sensitive to crovirin, with low IC50 or LD50 values (1.10-2.38 µg/ml). A considerably higher concentration of crovirin (20 µg/ml) was required to elicit even limited toxicity on mammalian cells.

Conclusions

This is the first report of CRISP anti-protozoal activity, and it suggests that other members of this family might have potential as drugs or drug leads for the development of novel agents against trypanosomatid-borne neglected diseases.

Introduction

The pathogenic trypanosomatids from the genera Leishmania and Trypanosoma infect over 20 million people worldwide, with an annual incidence of ~3 million new infections in at least 88 countries. An additional 400 million people are at risk of infection through exposure to insect vectors harboring parasites [1][2][3]. Leishmania and trypanosome infections predominate in poorer nations, and are considered neglected diseases that have "fallen below the radar of modern drug discovery" [4]. Leishmania parasites cause five different disease forms: cutaneous (CL), mucocutaneous (MCL), diffuse cutaneous leishmaniasis (DCL), post-kala-azar dermal leishmaniasis (PKDL) and visceral leishmaniasis (VL, also known as 'black fever' or 'kala-azar' in India) [5]. VL is the most severe and debilitating form of leishmaniasis, and can be fatal if left untreated. First-line treatment for leishmaniasis is based on pentavalent antimonials such as meglumine antimoniate (Glucantime) and sodium stibogluconate (Pentostam). Amphotericin B and pentamidine are used as second-line drugs in patients resistant to first-line therapy [1,6]. Recently, miltefosine has been used in India as part of combination therapy regimens to treat VL, and the largest increase in miltefosine activity was seen in combination with amphotericin B [7,8]. There are two forms of HAT (also known as sleeping sickness), caused by two subspecies of T. brucei parasites (T. b. gambiense or T. b. rhodesiense).
Both HAT forms culminate in parasite invasion of the central nervous system, with gradual nervous system damage if untreated. The currently used anti-HAT drugs (melarsoprol, eflornithine, pentamidine and suramin) are highly toxic and have lost efficacy in several regions. In addition, treatment is difficult to administer in resource-limited settings and is often unsuccessful [9,10]. Chagas' disease, caused by T. cruzi, affects the cardiovascular, gastrointestinal and nervous systems of human hosts and has become, in recent decades, a worldwide public health problem due to travel and migratory flows [2,11]. Chagas' disease chemotherapy is based on the use of nifurtimox and benznidazole, two very toxic nitroheterocyclic compounds with modest efficacy (especially against late-stage chronic disease) and 'plagued' by the emergence of drug resistance [12]. Given the high toxicity and limited efficacy of current treatments for leishmaniasis, Chagas' disease and HAT, the development of novel chemotherapeutics against these neglected diseases is essential. Animal venoms and poisons are natural libraries of bioactive compounds with the potential to yield novel drugs or drug leads for pharmacotherapeutics [13]. In particular, snake venoms have proven to be interesting sources of potential novel agents against neglected diseases, including Chagas' disease [14][15][16][17] and leishmaniasis [18][19][20][21][22][23]. CRISP amino acid sequences show a high degree of sequence identity and similarity, and include a highly conserved pattern of 16 cysteine residues which form 8 disulfide bonds [34]. Ten of these cysteine residues form an integral part of a well-conserved cysteine-rich domain at the C-terminus, although CRISP N-terminal sequences are overall more conserved than other regions of these proteins [33][34][35]. Snake venom CRISPs belong to the CRISP-3 subfamily [36], one of four subgroups of CRISPs according to amino acid sequence homology. Most biological targets of snake venom CRISPs described to date are ion channels [37][38][39][40][41][42][43], although the functions and molecular targets of most snake venom CRISPs remain to be determined. Some snake venom CRISPs have had their biological activities tested on crickets and cockroaches [35]. Snake venom CRISPs have been shown to block the activity of L-type Ca²⁺ and/or K⁺ channels and also of cyclic nucleotide-gated (CNG) ion channels, thereby preventing the contraction of smooth muscle cells [26,37,40-43]. The CRISPs catrin, piscivorin and ophanin, from the snake Crotalus atrox, caused moderate blockage of L-type calcium channels, partially inhibiting the contraction of smooth fibers from mouse caudal arteries [26]. The Philodryas patagoniensis (green snake) CRISP patagonin was capable of generating myotoxicity when injected into the gastrocnemius muscle, but did not induce edema formation, haemorrhage or inhibition of platelet aggregation [44]. Despite their myotoxicity, there are no reports of CRISP protein lethality in mice at doses of up to 4.5 mg/kg [35,45], and patagonin did not induce systemic alterations in mice, or histological changes in tissues from the cerebellum, brain, heart, liver or spleen [44]. In a previous publication, we showed that crude venom from the rattlesnake Crotalus viridis viridis had anti-parasitic activity against all forms of T. cruzi, and could be a valuable source of molecules for the development of new drugs against Chagas' disease [46].
In search of the molecular source of the anti-parasitic activity found in Cvv crude venom, we purified a Cvv CRISP that will henceforth be referred to as 'crovirin'. Here, we describe the purification, biochemical characterization and biological activity of crovirin against pathogenic trypanosomatid parasites and mammalian cells, showing that crovirin is active against infective developmental forms of trypanosomes and Leishmania at doses that elicit no or minimal toxic effects on human cells.

Author Summary

The pathogenic trypanosomatid parasites of the genera Leishmania and Trypanosoma infect over 20 million people worldwide, with an annual incidence of ~3 million new infections. An additional 400 million people are at risk of infection by exposure to parasite-infected insects, which act as disease vectors. Trypanosomatid-borne diseases predominate in poorer nations and are considered neglected, having failed to attract the attention of the pharmaceutical industry. However, novel therapy is sorely needed for Trypanosoma and Leishmania infections, currently treated with 'dated' drugs that are often difficult to administer in resource-limited settings, have high toxicity and are by no means always successful, partly due to the emergence of drug resistance. The last few decades have witnessed a growing interest in examining the potential of bioactive toxins and poisons as drugs or drug leads, as well as for diagnostic applications. In this context, we isolated and purified crovirin, a protein from the Crotalus viridis viridis (Cvv) snake venom capable of inhibiting and/or lysing infective forms of trypanosomatid parasites at concentrations that are not toxic to host cells. This feature makes crovirin a promising candidate protein for the development of novel therapy against neglected diseases caused by trypanosomatid pathogens.

Venom samples, compounds and reagents

Crude venom from the rattlesnake Crotalus viridis viridis (Cvv) and adjuvants, such as parasite growth media, were purchased from Sigma-Aldrich Chemical Co (

Purification of Crovirin

Lyophilized Cvv venom (10 mg) was dissolved in 1 ml of 20 mM Tris-HCl, 150 mM NaCl, pH 8.8 and centrifuged at 5,000 g for 2 min. The supernatant was applied onto a reverse-phase analytical C8 column (5 µm, 250 × 4.6 mm) (Kromasil, Sweden), previously equilibrated with the same buffer. Venom proteins were separated by reverse-phase HPLC (Shimadzu, Japan). Fractions (0.7 ml/tube) were collected at a 1 ml/h flow rate. A linear gradient of water/acetonitrile containing 0.1% trifluoroacetic acid (TFA) was used. The elution profile was monitored by absorbance at 280 nm, and the molecular homogeneity of the relevant fractions was verified by SDS-PAGE. Fractions containing protein peaks were dried in a Speed-Vac (Savant, Thermo Scientific, USA) and resuspended in distilled water prior to protein quantification by the Bradford method. Molecular mass determination was performed by MALDI-TOF and by electrospray ionization (ESI) mass spectrometry using a Voyager-DE Pro and a QTrap 2000 (both from Applied Biosystems), respectively.

In-gel digestion

Protein bands were excised from Coomassie Brilliant Blue-stained SDS-PAGE gels and cut into smaller pieces, which were destained with 25 mM NH4HCO3 in 50% acetonitrile for 12 h. The pieces obtained from the non-reducing gels were reduced in a solution of 10 mM dithiothreitol and 25 mM NH4HCO3 for 1 h at 56°C, and then alkylated in a solution of 55 mM iodoacetamide and 25 mM NH4HCO3 for 45 min in the dark.
The solution was removed, the gel pieces were washed with 25 mM NH4HCO3 in 50% acetonitrile, and then dehydrated in 100% acetonitrile. Finally, all pieces from the reducing and non-reducing gels were air-dried, rehydrated in a solution of 25 mM NH4HCO3 containing 100 ng of trypsin, and digested overnight at 37°C. Tryptic peptides were then recovered in 10 µl of 0.1% TFA in 50% acetonitrile.

Nano LC-MS/MS mass spectrometry

The peptides extracted from the gel pieces were loaded into a Waters nanoACQUITY system (Waters, MA, USA) and desalted online using a Waters Symmetry C18 trap column (180 µm × 20 mm, 5 µm). The typical sample injection volume was 7.5 µl, and liquid chromatography (LC) was performed using a BEH 130 C18 column (100 µm × 100 mm, 1.7 µm; Waters, MA, USA), eluting (0.5 µl/min) with a linear gradient of 10-40% acetonitrile containing 0.1% formic acid. Electrospray tandem mass spectra were acquired in a Q-Tof quadrupole/orthogonal acceleration time-of-flight spectrometer (Waters, Milford, MA) linked to a nanoACQUITY (Waters) capillary chromatograph. The ESI voltage was set at 3300 V, the source temperature was 80°C and the cone voltage was 30 V. Instrument control and data acquisition were conducted by a MassLynx data system (version 4.1, Waters), and experiments were performed by scanning from a mass-to-charge ratio (m/z) of 400 to 2000 using a scan time of 1 s, applied during the whole chromatographic process. The mass spectra corresponding to each signal from the total ion current (TIC) chromatogram were averaged, allowing for accurate molecular mass measurements. The exact mass was determined automatically using the Q-Tof's LockSpray (Waters, MA, USA). Data-dependent MS/MS acquisitions were performed on precursors with charge states of 2, 3 or 4 over a range of 50-2000 m/z, and under a 2 m/z window. A maximum of three ions were selected for MS/MS from a single MS survey. Collision-induced dissociation (CID) MS/MS spectra were obtained using argon as the collision gas at a pressure of 40 psi, and the collision voltage varied between 18 and 90 V, depending on the mass and charge of the precursor. The scan rate was 1 scan/s. All data were processed using the ProteinLynx Global Server (version 2.5, Waters). During processing, the m/z scales of both the MS and the MS/MS data were automatically lock-mass calibrated using a LockSpray reference ion. The MS/MS data were also charge-state deconvoluted and deisotoped with the maximum entropy algorithm MaxEnt 3 (Waters, MA, USA).

Mass spectrometry data analysis

Proteins corresponding to the tryptic peptides from peak 3 were identified by correlating tandem mass spectra with the NCBInr protein database (version 050623), using the MASCOT software (Matrix Science, version 2.1). Settings allowed for one missed cleavage per peptide, and an initial mass tolerance of 0.2 Da was used in all searches. Cysteines were assumed to be carbamidomethylated, and a variable modification of methionine (oxidation) was allowed. Identification was considered positive when at least two peptides matched the protein sequence with a mass accuracy of better than 0.2 Da.
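As a small aside on reading the ESI spectra acquired above, the helper below applies the generic [M + zH]z+ relation between a neutral mass and the observed m/z; the mass and charge states shown are illustrative, not a re-derivation of the paper's peak assignments.

```python
# Generic m/z of a protonated protein ion [M + zH]^z+ for charge state z.
PROTON_MASS = 1.00728  # Da

def mz(neutral_mass_da: float, charge: int) -> float:
    return (neutral_mass_da + charge * PROTON_MASS) / charge

for z in (1, 2, 3, 4):
    print(f"z = {z}: m/z = {mz(24893.64, z):.2f}")
# z = 2 falls near 12.4 kDa, the region of the doubly charged forms
# discussed in the Results.
```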
Parasites

T. cruzi tissue culture trypomastigotes (CL-Brener clone) were obtained from the supernatants of 5- to 6-day-old infected LLC-MK2 cells maintained in RPMI-1640 medium (Sigma) supplemented with 2% FCS for 5-6 days at 37°C in a humidified 5% CO2 atmosphere. These trypomastigotes were also used to obtain intracellular amastigotes in macrophage cultures.

Ethics statement

In this study, we used 5-week-old female CF1 mice as sources of peritoneal macrophages and of muscle samples for ex vivo assays (described below). All animal experimentation protocols were approved by the Commission to Evaluate the Use of Research Animals (CAUAP, from the Carlos Chagas Filho Biophysics Institute - IBCCF) and by the Ethics Committee for Animal Experimentation (Health Sciences Center, Federal University of Rio de Janeiro - UFRJ) (Protocol no. IBCCF 096/097/106), in agreement with Brazilian federal law (Law 11.794/2008, Decree 6.899/2009). We followed institutional guidelines on animal manipulation, adhering to the "Principles of Laboratory Animal Care" (National Society for Medical Research, USA) and the "Guide for the Care and Use of Laboratory Animals" (National Academy of Sciences, USA).

Parasite cytotoxicity assays

Crovirin was purified as described above and stored at −20°C in 3.6 mg/ml stock solutions prepared in PBS (pH 7.2). All experiments were carried out in triplicate. Stock solutions of Bz (14 mg/ml) and Amp-B (10 mg/ml) were prepared in dimethyl sulfoxide (DMSO), and the final concentration of the solvent never exceeded 0.5%, which is not toxic to parasites or mammalian cells. The Ber stock solution (0.188 mg/ml) was prepared in pyrogen-free water. Axenically grown parasite forms were treated with crovirin for up to 72 h under the same culture conditions used for growth (described above). The following crovirin concentrations were used to treat axenic forms: 1.2-4.8 µg/ml (L. amazonensis promastigotes) and 0.6-4.8 µg/ml (T. brucei rhodesiense BSF and PCF). IC50 values were calculated based on daily counting of formalin-fixed parasites using a hemocytometer. Positive controls were run in parallel with 4.7 µg/ml Amp-B [51] and 39.8 ng/ml Ber [52], respectively. T. cruzi tissue culture trypomastigotes were treated with crovirin (0.45-4.8 µg/ml) at a density of 1 × 10⁶ cells/ml for 24 h at 37°C (in RPMI medium containing 10% FCS). LD50 (50% trypomastigote lysis) values were determined based on direct counting of formalin-fixed parasites using a hemocytometer. Bz was used as the reference drug, at a concentration of 3.39 µg/ml [53]. To evaluate the effects of crovirin on T. cruzi and L. amazonensis intracellular amastigotes, peritoneal macrophages from CF1 mice were harvested by washing with RPMI medium (Sigma) and plated in 24-well tissue culture chamber slides, allowing them to adhere to the slides for 24 h at 37°C in 5% CO2. Adherent macrophages were infected with tissue culture T. cruzi trypomastigotes (at 37°C) or L. amazonensis metacyclic promastigotes (at 35°C) at a macrophage-to-parasite ratio of 1:10 for 2 h. After this period, non-internalized parasites were removed by washing, cultures were incubated for 24 h in RPMI with 10% FCS, and fresh medium with crovirin (0.45-3.6 µg/ml for T. cruzi, and 0.6-9.6 µg/ml for L. amazonensis) was added daily for 72 h. At different time points (24, 48 and 72 h), cultures were fixed with 4% paraformaldehyde in PBS (pH 7.2) and stained with Giemsa for 15 min. The percentage of infected cells and the number of parasites per 100 cells were determined by light microscopy examination. Positive controls of T. cruzi and L. amazonensis amastigote-infected cells were run in parallel with cultures treated with 0.73 µg/ml Bz [53] and 0.07 µg/ml Amp-B [54], respectively.
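The IC50 and LD50 values in these assays are derived from dose-response counts; the sketch below shows one common way to estimate them by fitting a Hill-type curve, with synthetic numbers standing in for the real counts (the functional form and doses are assumptions, not the authors' exact procedure).

```python
# Fit a Hill-type dose-response curve to synthetic normalized counts to
# estimate an IC50, then compute a selectivity index as in Table 1.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, slope):
    """Fraction of untreated-control growth remaining at each dose."""
    return 1.0 / (1.0 + (conc / ic50) ** slope)

conc = np.array([0.45, 0.9, 1.8, 3.6])          # ug/ml, assumed doses
response = np.array([0.85, 0.62, 0.38, 0.15])   # synthetic fractions of control

(ic50, slope), _ = curve_fit(hill, conc, response, p0=[1.5, 1.0])
print(f"IC50 ~ {ic50:.2f} ug/ml, Hill slope ~ {slope:.2f}")

# Selectivity index: host-cell LC50 (here the 20 ug/ml upper bound from the
# cytotoxicity assays) divided by the parasite IC50.
print(f"SI ~ {20.0 / ic50:.1f}")
```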
Mammalian cell cytotoxicity assays

LLC-MK2 cells were maintained in RPMI medium supplemented with 10% FCS. Prior to treatment with crovirin, cells were seeded in 24-well plates containing glass coverslips and incubated in RPMI medium supplemented with 10% FCS for 24 h at 37°C. Cells were then treated with 4.8, 10 and 20 µg/ml crovirin at 37°C for 72 h. LC50 values (the concentration that reduces cellular viability by 50%) for crovirin were calculated from daily counts of the number of viable cells, using trypan blue as an exclusion dye. At least 500 cells were examined per well on a Zeiss Axiovert light microscope (Oberkochen, Germany). In addition, mouse peritoneal macrophages were seeded in 96-well plates, incubated in RPMI medium with 10% FCS for 24 h at 37°C and treated with 4.8, 10 and 20 µg/ml crovirin at 37°C for 72 h. After this period, cells were washed with PBS (pH 7.2), and the wells were filled with RPMI medium without phenol red containing 10 mM glucose and 20 µl of a solution of 2 mg/ml MTS (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium salt) and 0.92 mg/ml PMS (phenazine methosulfate), prepared according to the manufacturer's instructions (Promega, Madison, WI, USA). Following 3 h of incubation at 37°C, the formation of a soluble formazan product by viable cells was measured with a plate reader by absorbance at 490 nm. All cytotoxicity experiments were carried out in triplicate.

Ex vivo myotoxicity assay

The myotoxicity of crovirin was studied ex vivo using a muscle creatine kinase (CK) activity assay [55]. The analysis consisted of monitoring the rate of CK release from isolated mouse extensor digitorum longus (EDL) muscle bathed in a solution containing crovirin (10 µg/ml). Adult male and female Swiss mice (25.0 ± 5.0 g) were anesthetized with ethyl ether and killed by cervical dislocation. EDL muscles were collected, freed from fat and tendons, dried and weighed. Muscle samples were then homogenized in 2 ml saline/0.1% albumin and their CK content was determined using a commercial diagnostic kit (Bioclin, Brazil). Four EDL muscles were mounted vertically in a cylindrical chamber and superfused continuously with Ringer's solution equilibrated with 95% O2/5% CO2. At 30- to 90-min intervals, the perfusing solution was collected and replaced with fresh solution. The collected samples were used for the measurement of CK activity as described above. Muscles were weighed at the end of the experiment (2 h later). Enzyme activity is reported as international units corrected for muscle mass.

Statistical analysis

Mean value comparisons between control and treated groups were performed using the Kruskal-Wallis test in the BioEstat 2.0 program for Windows. Differences with p ≤ 0.05 were considered statistically significant.

Purification of crovirin, a CRISP from the snake venom of C. viridis viridis

In a previous study, we showed that the Cvv venom had anti-parasitic activity against T. cruzi [46]. Preliminary analysis of Cvv venom fractions by reverse-phase chromatography (not shown) indicated that the activity eluted with fractions containing peak 3 of the chromatographic profile (Fig. 1A). Thus, we analyzed the main chromatographic fraction corresponding to peak 3 by SDS-PAGE and MALDI-TOF mass spectrometry (Fig. 1B-C). SDS-PAGE analysis of peak 3 showed a single polypeptide with relative molecular masses of 24 kDa (Fig. 1B) and 28 kDa (data not shown) under reducing and non-reducing conditions, respectively. We will refer to this protein henceforth as crovirin.
MALDI-TOF analysis of the intact protein showed a molecular mass of 24,893.64 Da (Fig. 1C). The peaks at 12,424.36 and 12,477.62 Da in the MS profile correspond to doubly charged (z = 2) cationic forms of the protein. The amino acid sequences of tryptic crovirin peptides (produced by nano LC-MS/MS mass spectrometry analysis) are nearly identical to a partial sequence of a Cvv CRISP (GenBank gi:190195319) (Fig. 2). The MS/MS-derived sequences are also nearly identical to those of a CRISP protein from Calloselasma rhodostoma (GenBank gi:190195317) and have a high degree of sequence similarity to several other snake venom CRISPs, including ablomin (Fig. 2). The MS/MS spectra of the fragmented peptide ions matched by MASCOT displayed a coverage of 48% of identical peptides, with a score ≥ 355, indicating extensive homology to the CRISP from C. rhodostoma. The MS results strongly suggested that a CRISP from Cvv snake venom had been purified, corresponding to crovirin.

Crovirin has significant anti-parasitic activity against infective forms of trypanosomatids, with minimal toxicity

We first investigated crovirin cytotoxicity towards mammalian host cells before proceeding with our analysis of the anti-parasitic activity of this venom protein. LLC-MK2 cells were treated with crovirin for 72 h and examined for viability using a trypan blue exclusion assay (Fig. 3A). None of the tested crovirin concentrations (4.8, 10 or 20 µg/ml) was capable of inducing significant loss of cell viability, even after 72 h of treatment. In addition, we tested the activity of crovirin against murine peritoneal macrophages to investigate its cytotoxicity towards primary host cells. Treated cells were examined using an MTS assay, and no significant toxicity (p ≤ 0.05) was observed under any treatment condition (Fig. 3B). Creatine kinase (CK) activity was measured before and two hours after extensor digitorum longus (EDL) muscle exposure to 10 µg/ml crovirin. We did not observe significant CK release from treated EDL muscles compared with the control (saline) after 2 hours of incubation with crovirin, indicating that this protein did not generate appreciable myotoxicity at the concentration tested. After establishing that crovirin had only minimal cytotoxic effects towards mammalian cells at concentrations of up to 20 µg/ml, we tested the anti-parasitic activity of purified crovirin against relevant developmental forms of three different species of pathogenic trypanosomatid parasites, namely L. amazonensis, T. cruzi and T. brucei rhodesiense. We tested crovirin activity against the two infective T. cruzi forms, trypomastigotes and amastigotes. Trypomastigote forms do not multiply and do not remain viable after several days in culture media at 37°C. Therefore, the effect of crovirin towards T. cruzi trypomastigotes was evaluated as the ability of the protein to lyse cells after 24 h of treatment (Fig. 4A). The calculated LD50 of crovirin for trypomastigotes was 1.10 ± 0.13 µg/ml (Table 1). This concentration displayed the second-highest selectivity index (18.2) (Table 1) among all crovirin treatments. Treatment with 3.39 µg/ml Bz produced 65.8% parasite lysis under the same conditions. T. cruzi amastigotes multiply in the intracellular environment. Crovirin inhibited the growth of amastigotes inside peritoneal macrophages in a dose-dependent manner (Fig. 4B), with an IC50 of 1.84 ± 0.53 µg/ml when cells were treated with crovirin for 72 h (Table 1).
Crovirin presented slightly superior trypanocidal activity against the intracellular forms compared with Bz (Fig. 4B). Crovirin activity was also tested against infective promastigote and amastigote forms of L. amazonensis, one of the species responsible for CL. None of the crovirin concentrations tested significantly inhibited the proliferation of L. amazonensis promastigotes in axenic media, unlike Amp-B treatment, which resulted in a reduction of a little over 80% in the number of parasites after 72 h of treatment. In contrast, crovirin inhibited the proliferation of intracellular amastigotes of L. amazonensis in a concentration-dependent manner (Fig. 4C-D). The effect of crovirin on amastigote proliferation was evident as early as 24 h after the start of treatment, and the IC50 for crovirin after 72 h of treatment was 1.21 ± 0.89 µg/ml (Table 1). After 48 h of incubation, the IC50 of 1.05 µg/ml also corresponded to the highest selectivity index (19.1), this being the treatment least toxic to mammalian host cells. However, no tested concentration of crovirin had leishmanicidal activity against amastigote forms superior to that of Amp-B (Fig. 4D). Both developmental forms of T. brucei rhodesiense tested here (PCF and BSF) were sensitive to crovirin treatment, with different profiles of growth inhibition in the presence of crovirin observed for PCF (Fig. 4E) and BSF (Fig. 4F).

Discussion

There is an urgent need for the development of novel compounds for the treatment of trypanosomatid-borne diseases, currently treated with 'dated' chemotherapeutic agents of high toxicity and limited efficacy, partly due to the emergence of drug resistance. Animal venoms and toxins, including snake venoms, can provide compounds directly useful as drugs, or with potential as drug leads for the synthesis of novel therapeutic agents [22]. Previously, our group showed that Cvv crude venom displayed anti-parasitic activity against different T. cruzi developmental forms [46]. We have now extended this research with the purification of crovirin, a CRISP from Cvv venom with promising activity against key infective stages of the life cycles of T. cruzi, T. brucei rhodesiense and L. amazonensis. Furthermore, we show that crovirin has low toxicity towards host cells and mouse muscle, in agreement with the low or absent toxicity reported for most CRISP proteins [35,44,45]. CRISP proteins are often given names that refer to the organism from which they were isolated. The first CRISP described in reptiles was isolated from the skin secretion of the lizard Heloderma horridum and was named helodermin [56]. Examples of proteins isolated from snake venoms are patagonin, isolated from Philodryas patagoniensis [44]; latisemin, isolated from the sea snake Laticauda semifasciata; tigrin, isolated from Rhabdophis tigrinus tigrinus [41]; and ablomin, isolated from Gloydius blomhoffi [41]. CRISP sequences have also been identified in transcriptome analyses of venom glands [57,58] or are deposited in databases but have not been purified or studied. A partial CRISP sequence from Crotalus viridis viridis (GenBank gi:190195319), likely corresponding to the central and C-terminal regions of crovirin, was identified by transcriptome analysis of venom gland tissue. However, this is the first report on the purification and study of crovirin. One of the most important findings of the present study was the activity of crovirin against the intracellular proliferation of trypanosomatids.
Amastigotes are key developmental forms in the establishment and maintenance of infections by Leishmania and T. cruzi, representing the replicative intracellular stages of these protozoan parasites. Substantial inhibition of both T. cruzi and L. amazonensis intracellular amastigote proliferation was observed at crovirin concentrations significantly lower than those required to cause damage to host cells, including mouse EDL muscles. These results are particularly important because the currently available drugs to treat leishmaniasis and Chagas' disease are known to have lower anti-amastigote activity [1,6]. The effects of crovirin on both the procyclic and the bloodstream forms of T. brucei rhodesiense are also encouraging, suggesting that crovirin might be useful in the development of new anti-HAT chemotherapeutics. In conclusion, our results demonstrate that crovirin has promising trypanocidal and leishmanicidal effects and represents a potential avenue for drug development against leishmaniasis, Chagas' disease and HAT, since its antiparasitic effects are matched by low toxicity to host cells and muscles. Further studies are now required to extend our knowledge of the potential use of crovirin as an alternative compound to improve the effectiveness of treatment of trypanosomatid-borne neglected diseases.
Fine-scale water mass variability inside a narrow submarine canyon (the Besòs Canyon) in the NW Mediterranean Sea

In this work we report short-term measurements of the thermohaline structure and velocity field inside a narrow submarine canyon by means of a yo-yo-like profiler. An Aqualog profiler was deployed inside the Besòs Canyon on the northwestern Mediterranean continental margin, providing a unique data set on the vertical evolution of water column characteristics with unprecedented fine-scale spatial and temporal resolution. The observations reported here show a very dynamic, transient short-term response with a complex vertical structure not observed previously in any submarine canyon of this region. The vertical distribution of water masses was characteristic of the western Mediterranean basin, with Atlantic Water (AW) at the surface, Western Intermediate Water (WIW) in the middle and Levantine Intermediate Water (LIW) below. The Turner angle and empirical orthogonal functions show that double-diffusive and isopycnal mixing are the dominant processes at small scales. The interfaces of the three layers exhibit large vertical excursions over relatively short times. At the surface, deepening of the AW was observed, associated with flow intensification events. Deeper in the water column, within the submarine canyon confinement, the WIW-LIW interface moves up and down by about 100-150 m. These motions are associated with relatively enhanced up- and down-canyon current events (up to 15-20 cm s⁻¹ at 500 and 800 m depths) along the canyon axis. The time scales of the vertical variability were concentrated in a broad band around the semi-diurnal and local inertial frequencies within the WIW and LIW layers.

INTRODUCTION

The hydrodynamics of coastal water masses has different impacts on oceanographic processes occurring on continental margins (Huthnance 1995). Particularly in continental shelf and slope areas, shelf-edge physical processes and interactions between flow dynamics and bathymetry have a strong influence on upwelling/downwelling mechanisms and/or cross-margin water and particulate matter exchanges (Hickey 1997, Spurgin and Allen 2014). Submarine canyons can modify water mass behaviour and enhance cross-margin particle fluxes, and can consequently have a major impact on sedimentary and biological processes (Gili et al. 1998, Palanques et al. 2005, Allen and Durrieu de Madron 2009, Puig et al. 2014). The hydrodynamics in submarine canyons depends on several forcing conditions, such as the general circulation, tidal regime, bottom morphology and atmospheric patterns. However, forcing conditions differ among canyons and can give different responses. Therefore, detailed monitoring of the oceanographic features that may be present in canyon hydrodynamics is needed to properly understand canyon water mass behaviour.

In this work, we focus on the detailed hydrodynamics of a submarine canyon (the Besòs Canyon) that is deeply incised in the continental slope of the NW Mediterranean continental margin (Canals et al. 2013). The thermohaline structure and dynamics of the water masses in this particular area comprise a three-layer system (Hopkins 1978, Salat and Cruzado 1981, Salat et al.
2002). In the first layer, from the surface to 150-300 m, modified Atlantic Water (AW) is generally found. This water mass comes from the Strait of Gibraltar and is transformed and modified as it spreads and circulates cyclonically around the western Mediterranean basin. The second layer is formed by Western Intermediate Water (WIW) (Lacombe and Tchernia 1972, Salat and Font 1987, Pinot and Ganachaud 1999) and is located between 300 and 600 m. The WIW forms by winter convection in the region of the Gulf of Lions and shows potential temperatures of 12.5°C to 12.8°C and salinities of 38.1 to 38.3. Below, a third layer between 600 and 800 m is occupied by the Levantine Intermediate Water (LIW), with potential temperatures of 13.0°C to 13.4°C and salinities of 38.48 to 38.54 (Ovchinnikov et al. 1976, Font 1987, Millot 1999).

The oceanographic conditions at the surface are characterized by the presence of a quasi-permanent frontal current, the Liguro-Provençal-Catalan Current or Northern Current, along the shelf-slope (e.g. Font et al. 1988, Masó and Tintoré 1991, Pinot et al. 1995, Millot 1999, Pinot et al. 2002). This current is associated with a baroclinic front, which separates fresh coastal waters, mainly from the Rhône and Ebro Rivers, from saltier open-sea water in the deeper areas of the basin (Font et al. 1988). Analysis of altimetry, infrared satellite images and in situ observations has revealed significant annual and seasonal variability of this frontal current (LaViolette et al. 1990, López García et al. 1994, Font et al. 1995, Mason and Pascual 2013), with instabilities, meanders and mesoscale eddies and filaments (e.g. Tintoré et al. 1990, García et al. 1994, Pascual et al. 2002). A major source of this frontal current variability is its interaction with bottom morphology (Arnau 2000, Pascual et al. 2004, Rubio et al. 2005, 2009). The tendency of the current to follow isobaths along the shelf break causes an increase in variability when it interacts with the submarine canyons. This has been documented at several sites: the Cap de Creus, Palamós and Blanes Canyons (e.g. Masó et al. 1990, Palanques et al. 2005, Flexas et al. 2008).

Detailed measurements inside these submarine canyons have shown a complex structure of currents and thermohaline variability as the flow adjusts to the canyon shape (e.g. Puig et al. 2000, Palanques et al. 2005, Flexas et al. 2008). These measurements were obtained mainly through extensive deployments of moorings, accompanied by hydrographic sampling during deployment, maintenance and recovery periods. However, tracking the time evolution of the water masses was not possible in previous measurements, except when additional sensors were installed with the mooring deployment. With this purpose in mind, in 2012 continuous monitoring of the full water column was designed with the help of an Aqualog profiler. The objective was to explore the temporal and vertical characteristics of the water masses inside the narrow Besòs Canyon, complementing the previous measurements made in other, wider canyons of this region. This paper reports the observations carried out during this experiment, focusing on the temporal evolution of the water column properties and, particularly, the fine-scale effects on water mass variability at the canyon head. In the following sections we first describe the methods used to process the data, and then present the main results and discussion.
MATERIALS AND METHODS

The Besòs Canyon is located on the northwestern Mediterranean continental margin, ~20 km offshore of Barcelona (2.52°E, 41.31°N, Fig. 1). The canyon is relatively narrow, with a mean width of around 5 km and steep sidewalls, and its head barely incises the shelf at the ~100 m isobath. Compared with other submarine canyons in the area, it has a very rectilinear and uniform signature, almost perpendicular to the southwestward trend of the shelf. The data analysed here were obtained in an experiment consisting of deploying, close to the canyon head at a depth of 808 m, a mooring line equipped with an Aqualog profiling carrier (Carlson et al. 2013, Ostrovskii et al. 2013), a device that travels up and down along the mooring line carrying several probes (Fig. 2). For this experiment, the Aqualog was equipped with an SBE 52-MP CTD probe, a Nortek Aquadopp acoustic current meter and a Seapoint turbidity sensor. The mooring was deployed on 23 March 2012 and recovered on 22 May 2012. Unfortunately, due to a technical failure of the internal memory card, the Aqualog profiler stopped after 11 days of operation, providing data only until 3 April.

The mooring arrangement was designed to scan the vertical range of 62-792 m with six up and down casts per day. The upper limit of the range varied slightly, finally stabilizing at 75 dbar, and the maximum attained depth was 801 dbar. The CTD was configured to sample the water column at 0.2 m resolution, whereas the Doppler current meter was set up to sample at 1 m resolution. To proceed with a homogeneous methodology for all the parameters, we first pre-processed the full set of CTD profiles to place the temperature and salinity values at the same levels as the velocity measurements. Then, all the data were cut at the upper end at 78 dbar to obtain the longest possible time series with the same upper depth for all profiles. The Aqualog moves along the line at a relatively constant speed of 0.17-0.18 m s⁻¹, taking 2 hours to complete a full up-down cycle; it then rests at the bottom until the next up-down cycle, which starts 2 hours later. The sampled profiles were interpolated to obtain regular spatial-temporal sampling, which gives a matrix for the whole sampling period composed of 131 profiles of 723 vertical points (from 23 March to 3 April and from 78 dbar to 801 dbar). Finally, the velocity field was rotated according to the canyon axis orientation to separate the along- and across-canyon velocity components. A rotation of 40° anticlockwise was applied to the velocity field components.
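A minimal sketch of that final rotation step is given below; the 40° angle follows the text, while the velocity samples and the sign convention of the rotation matrix are illustrative assumptions.

```python
# Rotate measured (eastward, northward) velocities into along- and
# across-canyon components with a 40° anticlockwise rotation.
import numpy as np

theta = np.deg2rad(40.0)                 # canyon axis orientation, from the text
rot = np.array([[np.cos(theta),  np.sin(theta)],
                [-np.sin(theta), np.cos(theta)]])

u = np.array([0.10, -0.05, 0.02])        # eastward velocity, m/s (synthetic)
v = np.array([0.05,  0.12, -0.08])       # northward velocity, m/s (synthetic)

along, across = rot @ np.vstack([u, v])  # rows: along-canyon, across-canyon
print(along, across)
```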
To identify the water mass interfaces, we consider the typical water mass temperature and salinity values presented in the Introduction. This criterion allows us to identify the AW-WIW and WIW-LIW interfaces. Double-diffusive mixing is one of the principal small-scale mixing processes that adjust the density field, diminishing the existing salt and heat excesses in adjacent water masses. In order to explore the intensity and presence of small-scale mixing processes, we computed the Turner angle, Tu (Ruddick 1983), which describes the likelihood of contacting layers in a stratified water column developing double-diffusive mixing. Tu can be defined as a polar angle in the (αT_Z, βS_Z) plane measured relative to the αT_Z = βS_Z > 0 line (Radko 2013):

Tu = arctan2(αT_Z + βS_Z, αT_Z − βS_Z),

where the first of the two arguments of the arctangent function is the "y" argument and the second one the "x" argument. The Turner angle is quoted in degrees of rotation in such a way that angles between 45° and 90° represent the "salt-finger" regime of double-diffusive convection, with the strongest activity near 90°. Turner angles between −45° and −90° represent the "diffusive" regime of double-diffusive convection, with the strongest activity near −90°. Turner angles between −45° and 45° represent regions where the water column is stably stratified in both the temperature and salinity fields, and Turner angles greater than 90° or less than −90° characterize a statically unstable water column (IOC 2010).

Finally, to analyse whether the mixing was diapycnal or isopycnal, we used the (βS', αT') plane, where αT' and βS' are the temperature and salinity anomalies normalized by the thermal expansion (α) and salinity contraction (β) coefficients (Zhurbas et al. 1987). It was shown by Pingree (1972) that for fine-structure anomalies resulting from isopycnal advection, αT' = βS', and for fine-structure inhomogeneities resulting from vertical mixing, T'/S' = T_Z/S_Z (where T_Z and S_Z are the mean vertical gradients in the investigated layer). The work of Pingree can be used to relate the complementary views of the Turner angle and Zhurbas' analysis.

After calculating Tu to determine the spatial scales at which small-scale mixing can occur, a spatial-temporal decomposition of the temperature and salinity anomalies through empirical orthogonal functions (EOFs) was performed to find the main modes of variability in the sampled data. For each profile we removed the mean and normalized the anomalies by their standard deviation. EOFs were then computed using a singular value decomposition algorithm applied to the covariance matrix, retaining modes with associated eigenvalues significantly different from zero (Navarra and Simoncini 2010).
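A minimal numerical sketch of the Turner angle computation defined above is shown below; the profile and the constant α and β values are synthetic stand-ins (in practice α and β come from the equation of state and T and S from the Aqualog CTD).

```python
# Turner angle Tu = atan2(alpha*Tz + beta*Sz, alpha*Tz - beta*Sz), degrees:
# 45..90 salt fingering, -90..-45 diffusive convection, |Tu| < 45 doubly stable.
import numpy as np

z = np.arange(-800.0, -75.0, 1.0)            # depth grid, m (positive upward)
T = 13.0 + 0.8 * np.exp((z + 75) / 200)      # synthetic potential temperature
S = 38.45 - 0.25 * np.exp((z + 75) / 250)    # synthetic salinity

alpha, beta = 2.0e-4, 7.6e-4                 # assumed constant coefficients
Tz, Sz = np.gradient(T, z), np.gradient(S, z)

Tu = np.degrees(np.arctan2(alpha * Tz + beta * Sz, alpha * Tz - beta * Sz))
print(Tu.min(), Tu.max())
```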
Thermohaline structure

Figure 3 shows the thermohaline properties of the full set of profiles. The three characteristic Mediterranean Sea water masses can be identified both in the θ-S diagram (Fig. 3A) and in the water column thermohaline distribution and variability (Fig. 3B-D). The upper 100- to 150-m layer is occupied by a relatively warm and less saline surface water mass corresponding to modified AW. In the depth range between 150 and 500-600 m, a clear signal of WIW was found. As previously mentioned, this water is formed by winter convection of cold surface water and is characterized by relatively low temperatures and salinity. Below the WIW, between 500-600 m and down to at least 800 m water depth, the warmer and saltier LIW was observed.

During the 11 days of sampling, the boundary between modified AW and WIW varied from ~150 to ~300 m water depth, showing a 4- to 6-day oscillatory pattern, which was most noticeable in the thermal structure. The boundary between WIW and LIW oscillated at much higher frequencies, with vertical isotherm and isohaline fluctuations of 100-150 m occurring within the semi-diurnal and local inertial band (18 h), more evident at the beginning of the deployment (Fig. 3).

Hydrodynamic variability

The temporal evolution of the velocity profiles associated with these three water masses is shown in Figure 4. The velocity fields show very different behaviour between the upper and the lower layers and between the along-canyon (Fig. 4A) and across-canyon (Fig. 4B) components. There is a predominance of negative cross-canyon currents (i.e. towards the SW) in the upper levels, with almost no reversal events during most of the measurement period. This was expected given the general along-slope behaviour of the frontal current and because the upper levels are unaffected by the canyon morphology. On the other hand, in deeper layers (below 300 m water depth) the currents appeared to be affected by the canyon rims, and relatively intense velocities polarized along the canyon axis were observed, with alternating periods of up- and down-canyon flows. Some events around 26 and 28 March were also associated with comparable cross-canyon components, but in the opposite direction to the upper layer, especially for the event of 28 March. A particularly noteworthy event appeared between 24 and 25 March: it shows a barotropic up-canyon response over the first 600 m and a response in the down-canyon direction below, coinciding with the WIW-LIW interface. Then, the AW-WIW interface deepens and the flow in the WIW layer changes from the up- to the down-canyon direction, simultaneously with a sign change of the flow in the LIW layer, within the three-layer structure. This response is not repeated exactly in the other events (28 March and 2 April), although an intensification of part of the WIW layer (300-500 m) is seen in the up-canyon direction when the flow is intensified at the surface.
Another interesting observation is that up- and down-canyon flow events are shorter and appear at higher frequency than the flows in the upper layer. In general, LIW velocities vary at a higher frequency in the along-canyon component than the surface layers and the cross-canyon components, in agreement with the fluctuations of the isotherms and isohalines (Figs 3 and 4). As stated above, deeper in the water column and within the submarine canyon confinement, the WIW-LIW interface exhibits excursions of about 100-150 m. These motions are associated with relatively enhanced up- and down-canyon current events (up to 15-20 cm s⁻¹ at 450-500 and 800 m depths) along the canyon axis. To explore the nature of these oscillations, a spectrogram of the along-canyon component was computed. Figure 5A shows the spectrogram of the along-canyon velocity component for all depths. It shows a clear dependence of the frequency on the vertical coordinate. It can be appreciated in Figures 5B and 5C that there are typical frequencies at selected depths (450 m and 800 m). The confidence levels in Figures 5B and 5C indicate that the analysis is not very robust from a statistical point of view, which can be understood considering the limited number of degrees of freedom in the time series (due to its shortness). Moreover, although the broad peaks indicate the absence of a single clear characteristic process, together with a lack of resolution, they are centred around some characteristic bands. Along-canyon current oscillations within the WIW layer show the highest spectral density around the local inertial period (18 h) and the semi-diurnal tidal component, although lower frequencies around 30 h also coexist (Fig. 5B). Within the LIW and close to the seafloor, the spectrogram shows a much broader peak, which extends from diurnal periods to a maximum spectral density around 50 h (Fig. 5C). The cross-canyon spectral component near the surface (not shown) has characteristic periods close to the inertial oscillations.

Small-scale mixing

The three-layer structure identified over time creates favourable conditions for small-scale mixing, which homogenizes the thermohaline contrasts in the contacting layers, offsetting excesses/deficiencies of heat and salt. The double-diffusion process is observed in this case, and we use Tu to characterize the favourable conditions for thermal diffusion and salt fingering.

The time series of Tu profiles are presented in Figure 6. It can be observed that the small-scale mixing conditions change over time, with a preference for salt fingering near the interface between surface AW and WIW (45° < Tu < 90°) and thermal diffusion at the interface between WIW and LIW (−90° < Tu < −45°).

The isopycnal or diapycnal nature of small-scale mixing can be studied through the analysis of the observed fine-structure thermohaline anomalies. As explained in the previous section, once the anomalies are calculated, EOFs were obtained for the normalized salinity and temperature anomaly profiles to characterize the observed variability of the water masses. The first mode of salinity and temperature represents 13% of the total anomaly variance, the second mode 9.7% and the third mode 7.1%. This means that the most representative EOFs accommodate around 30% of the total anomaly variance (modes not shown). Although the variance represented by these modes is not very high compared with the total variance, the separation of the modes allows us to analyse the different scales.
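For reference, the EOF step described in the Methods can be sketched as below, with a random matrix standing in for the 723-level by 131-profile anomaly field; the SVD route is equivalent to eigenanalysis of the covariance matrix.

```python
# EOFs of normalized anomaly profiles via SVD; each column is one profile.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((723, 131))     # levels x profiles, synthetic anomalies
X -= X.mean(axis=1, keepdims=True)      # remove the time mean at each level

U, s, Vt = np.linalg.svd(X, full_matrices=False)
var_frac = s**2 / np.sum(s**2)          # variance fraction explained per mode
modes = U[:, :3]                        # first three spatial EOF modes
print(var_frac[:3])
```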
Using these scales, we can apply the methodology of Zhurbas et al. (1987) to assess the role of fine-scale mixing processes at the AW-WIW and WIW-LIW interfaces. In our case, the second EOF mode effectively represents the three-layer structure in the anomaly part of the profiles. To assess the role of the AW-WIW and WIW-LIW interfaces in the small-scale mixing processes, we represent the second EOF mode in the (βS', αT') plane (Fig. 7). We can observe that this mode is aligned with the bisector, which indicates an isopycnal mode of mixing. Thus, this analysis reveals a predominance of stable-diffusive and isopycnal mixing over diapycnal or unstable mixing.

DISCUSSION

Scientific and technological progress during the past years has revealed many natural phenomena inside submarine canyons (Xu 2011). High-resolution sampling in both time and space is needed to resolve and analyse the scales of the processes inside canyons. The dynamics of water circulation within the submarine canyons incised into the NW Mediterranean continental margin has previously been studied using moored current meters in various research projects mainly devoted to the quantification of sedimentary fluxes. These include observations at the Grand-Rhône Canyon (Durrieu de Madron 1994), the Foix Canyon (Puig et al. 2000), the Palamós Canyon (Palanques et al. 2005, Martín et al. 2007), the Cap de Creus Canyon (Puig et al. 2008, Martín et al. 2013) and the Blanes Canyon (Flexas et al. 2008, Zúñiga et al. 2009, López-Fernandez et al. 2013). However, these current meter time series, which were generally accompanied by temperature, salinity and turbidity measurements, provided very localized and partial information on the hydrodynamics of the various water masses interacting with the canyon. Even in cases in which the horizontal spatial scale was quite well addressed (by deploying several moorings along the canyon axis and on its flanks), the variability on the vertical spatial scale (throughout the water column) was poorly sampled. This was because few current meters were installed on each mooring array, and in most cases only near-bottom measurements were reported.

To provide the missing vertical spatial-scale information, in this work we have presented the data provided by an Aqualog moored profiling carrier deployed at the head of the Besòs Canyon (NW Mediterranean). The experiment was relatively short and does not allow us to generalize about all the observed characteristics of the interaction between the canyon and the regional flow configuration. For this reason we have focused on aspects related to the smaller scales associated with the variability of the water masses inside the canyon.

The vertical positions of the limits of the three water masses found inside the canyon show intense, quasi-oscillatory vertical displacements. The limit between the modified AW and the WIW shows a 4- to 6-day oscillation (Fig. 3), which sometimes appears related to intensifications of the geostrophic current in the along-slope (towards SW) direction (Fig. 4B), associated with a deepening of the AW-WIW interface. Not all these events are similar with respect to the vertical structure: one appears to be structured as a three-layer response, while the others are more confined to the upper and WIW layers.

The main surface currents observed are in the across-canyon direction, consistent with the along-shelf/slope frontal current that characterizes the regional circulation (Font et al.
1988). However, the predominance of currents along the canyon axis at depth is a consequence of the narrowness of the canyon topography, as was observed in the similarly narrow Foix Canyon (Puig et al. 2000). The Besòs Canyon has a characteristic width of about 5 km, which is smaller than the typical Rossby radius of deformation in the area (around 12 km), preventing the adjustment of the frontal flow to the canyon morphology, as has been described in the literature (e.g. Klinck 1996, Jordi et al. 2005). In wider canyons, such as the Blanes or Palamós Canyons, located several kilometres upstream, the main currents across the canyon axis adjust to the shape of the canyon walls (Palanques et al. 2005). In wider canyons, numerical simulations and observational measurements show that the flow adjustment for similar configurations (right-bounded flow) produces a downwelling flow in the upper layers on the upstream wall and an upwelling flow on the downstream wall (Klinck 1996, Jordi et al. 2005).

The occurrence of flow intensification and deepening of the upper layer (2-4 days) contrasts with the shorter variability of the WIW-LIW interface. Without additional information it is difficult to determine which process is responsible for such variability. Meandering of the frontal current, eddies around the canyon, low-frequency meteorological forcing and propagation of topographic waves along the shelf have all been proposed in this area (e.g. Puig et al. 2000, Palanques et al. 2005). Similar low-frequency fluctuations have also been described elsewhere in the northwestern Mediterranean, and have been identified as topographic waves (Crépon et al. 1982, Sammari et al. 1995, Durrieu de Madron et al. 1999). At smaller scales, the analysis of the results indicates that this variability of the AW-WIW and WIW-LIW interfaces is characterized by Tu angles compatible with salt fingering at the AW-WIW interface and thermal diffusion at the WIW-LIW interface. After separating the first three anomaly modes, we also found that isopycnal mixing processes can occur. Although the relative contribution of the selected second EOF mode to the total variance is not very high, it shows a clear three-layer signature and thus allows us to use a (βS', αT')-plane representation, which reveals that isopycnal processes dominate in this mode. Further investigations should be conducted to properly understand the scales involved in the isopycnal processes, which our analysis reveals as the most influential. However, the coupling among the analysed modes (shown by the variance spread along the EOFs) indicates a joint contribution of these processes to the measured phenomena, without a clear dominance of a single driving process.
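The scale argument above is easy to check numerically: with an assumed first-mode internal wave speed, the baroclinic Rossby radius at the mooring latitude comes out close to the ~12 km quoted in the text.

```python
# Back-of-the-envelope baroclinic Rossby radius R = c/f at the mooring site.
import math

lat = 41.31                                  # deg N, from Fig. 1
omega = 7.2921e-5                            # Earth's rotation rate, rad/s
f = 2 * omega * math.sin(math.radians(lat))  # Coriolis parameter, ~9.6e-5 1/s

c = 1.1          # assumed first-mode internal gravity wave speed, m/s
R = c / f        # baroclinic Rossby radius, m
print(f"R ~ {R / 1e3:.1f} km vs canyon width ~5 km")   # ~11 km
```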
To summarize the experiments presented here, the observations of water masses and currents inside the Besòs Canyon have revealed an unexpectedly rich internal structure that could not be seen with traditional surveys using moorings or ship cruises, as has been done in nearby canyons of the same region. The interaction of the shelf frontal current with a narrow canyon is quite complex in terms of the structure and variability of water masses and currents. Major vertical excursions at short time scales were observed, associated with enhanced along- and across-canyon events near the semi-diurnal tidal, inertial and longer temporal scales. These events sometimes appeared in the velocity field with a spotty character, which may also be the signature of quasi-inertial oscillations generated by the adjustment of the frontal current over the canyon or by the meandering of the frontal current, combined with signal amplification towards the canyon head region. The intermittent nature of these events, with different vertical responses, and the lack of a dominant mode of variability may be due to the mooring location close to the head of the canyon. The constrained shape of the Besòs Canyon head (narrow and steep) probably favours a response built from the contributions of many short-term events of different natures, which spreads the variance among many modes. A similar absence of a clear spectral signature in near-bottom currents was also noticed at the head of the Foix Canyon (Puig et al. 2000), where current reversals were more frequent than at deeper canyon sites, which have shorter along-canyon axis displacements. Finally, the small-scale mixing processes at the scale of the second EOF mode show an isopycnal nature. However, further analysis should be carried out to quantify the role of each process (isopycnal or diffusive mixing) in small-scale mixing. Unfortunately, the short duration of the deployment and the experimental configuration with only one mooring did not allow us to provide robust and statistically significant analysis, and further intensive observations should be carried out in the near future.

Fig. 1. Bathymetric chart of the northwestern Mediterranean continental margin showing the location of the Besòs Canyon. The red dot indicates the site where the Aqualog profiler was deployed (2.52°E, 41.31°N).

Fig. 2. Scheme of the mooring line. The inset shows the Aqualog profiler, which moves along the cable of the mooring line.

Fig. 3. A, potential temperature-salinity diagram of the whole set of profiles acquired by the Aqualog. Black lines indicate density levels in sigma-t units. The colour scale indicates the pressure level in db. Panels B, C and D show the time evolution of the potential temperature, salinity and density, respectively. The black line in each of the B, C and D plots shows the separation between WIW and LIW.

Fig. 4. Time evolution of velocity (m s⁻¹) profiles: A, along-canyon axis component and B, cross-canyon axis component (positive values are towards the NE and up-canyon).

Fig. 5. A, raw spectrogram of the along-canyon velocity component. Spectrograms at 450 m (B) and 780 m (C) smoothed with a Daniell window of six terms. Dashed lines represent 95% confidence levels.

Fig. 6. Time series of Tu profiles; α and β are the coefficients of thermal expansion and salinity contraction, respectively, of the mean profile.

Fig. 7. Second EOF mode of the anomaly of the temperature and salinity fields in the range 500-700 m depth (the range of depths at which the WIW-LIW interface is located over time).
6,048
2016-09-30T00:00:00.000
[ "Environmental Science", "Geology" ]
Deep Learning-Based Construction and Processing of Multimodal Corpus for IoT Devices in Mobile Edge Computing Dialogue sentiment analysis has been a hot topic in the field of artificial intelligence in recent years, and the construction of a multimodal corpus is the key part of dialogue sentiment analysis. The rapid development of the Internet of Things (IoT) provides a new means of collecting multiparty dialogues to construct a multimodal corpus, and the rapid development of Mobile Edge Computing (MEC) provides a new platform for the construction of such a corpus. In this paper, following the corpus-construction procedure that we propose, we construct a multimodal corpus on MEC servers to make full use of the storage space distributed at the edge of the network. At the same time, we build a deep learning model (a sentiment analysis model) and use the constructed corpus to train it for sentiment on MEC servers, making full use of the computing power distributed at the edge of the network. We carry out experiments based on a real-world dataset collected by IoT devices, and the results validate the effectiveness of our sentiment analysis model. Introduction With the rapid development of the Internet of Things (IoT), the number of connected things is continuously increasing, as are their interactions [1-4]. At the same time, with the wide-ranging application of communication tools and several models, the IoT has been employed in military services, the medical sector, mobile communications, industrial fields, and so on [5-7]. According to the "Global IoT Device Data Report" released by the International Data Corporation (IDC), it is predicted that by 2025 the number of global IoT devices will reach 41.6 billion, including all kinds of machines and sensors, smart homes, vehicles, wearable devices, and industrial equipment, and the amount of data generated annually will reach 79.4 ZB. It should be noted that data collected by IoT devices must be processed before they can be used [8]. The traditional centralized network cannot meet the needs of mobile users due to low storage capacity, high energy consumption, low bandwidth, and high latency [9,10]. Mobile Cloud Computing (MCC), as the integration of cloud computing and mobile computing, provides mobile devices with the storage, computation, and energy resources of the centralized cloud. In recent years, mobile computing has shifted from centralized Mobile Cloud Computing to Mobile Edge Computing (MEC), driven by the vision of the IoT and 5G communications [11-14]. The primary function of MEC is to push mobile computing, network control, and storage to the edge of the network (for example, base stations and access points) so that the large amount of computing power and storage space distributed at the edge of the network can provide sufficient capacity to perform computation-intensive and delay-critical tasks for mobile devices [15-18]. Therefore, data collected by IoT devices can be processed using MEC servers [19]. While sentiment analysis has been successful for text, it has not been fully studied for data that simultaneously contain two or three of the text, audio, and vision modalities. The biggest setbacks for studies in this direction are the lack of a proper dataset and methodology [20,21].
However, the development of IoT and MEC provides a new solution for the construction of a multimodal corpus and the training of a sentiment analysis model. We can use IoT devices (such as IP cameras) to collect audio and vision data containing the context information, transmit the collected data to the back end through the network for processing and storage, and use the collected data to construct a multimodal corpus [22-24]. Then, we extract the text, audio, and vision features of the corpus, build a multimodal sentiment analysis model based on the DialogueRNN model, deploy the model to the MEC server to make full use of the MEC server's advantages, and use the extracted text, audio, and vision features to train this deep learning model [25-27]. Relevant Corpora for Multiparty Dialogue in IoT The construction of the different corpora has similarities in many aspects, such as the data used, the annotation process, and the annotation specification, which has certain reference value for using IoT devices to collect data for corpus construction. At the same time, the construction of the different corpora can provide guidance and reference for the data collected by IoT devices from daily life or from situation comedies. At present, English corpora are considered the most abundant resource. Therefore, this paper, starting from English multimodal corpora, introduces the related subtasks and corresponding corpora as well as the latest research work on them, by sorting out the different multimodal sentiment analysis subtasks. Bimodal Corpora. In 2016, Niu's team introduced a multi-view sentiment analysis dataset (MVSA) consisting of image-text pairs with manual annotations collected from Twitter, annotated with positive, neutral, and negative sentiment labels [28]. The MVSA dataset consists of two parts. One is MVSA-Single, where each sample is annotated by one annotator and contains only one sentiment label, with a total of 4,869 image-text pairs. The other is MVSA-Multiple, where each sample is annotated by three annotators and contains three sentiment labels, with a total of 19,598 image-text pairs. This dataset can be used as a valuable benchmark for both single-view and multi-view sentiment analysis. In 2008, Busso's team introduced the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database [29]. In total, the dataset consisted of 4,784 impromptu dialogues and 5,255 scripted dialogues with an average duration of 4.5 seconds. Each sentence in the dialogue is annotated with a specific emotional label, including anger, happiness, sadness, excitement, and frustration. The dataset also provides continuous attributes: activation, valence, and dominance. These two types of discrete and continuous emotional descriptors contribute to a complementary understanding of human emotional expression and of emotional communication between people. Multimodal Opinion-level Sentiment Intensity. In 2016, Zadeh's team introduced the first opinion-level annotated corpus of sentiment and subjectivity analysis in online videos, called the Multimodal Opinion-level Sentiment Intensity (MOSI) dataset [30]. The sentiment intensity of the data, derived from YouTube videos of film reviews, is defined from strongly negative to strongly positive on a linear scale from -3 to +3. The sentiment annotation of this dataset reflects not the feelings of the viewers but the sentimental tendencies of the speakers in the videos. A total of 93 randomly collected videos come from 89 narrators, all of whom express their opinions in English.
CMU Multimodal Opinion Sentiment and Emotion Intensity. In 2018, Bagher Zadeh's team introduced CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI), the largest dataset for sentiment analysis and emotion recognition at that time [31]. The data of the CMU-MOSEI dataset are derived from YouTube monologue videos. This dataset, which has both emotion and sentiment annotations, contains 23,453 annotated video segments from 1,000 distinct speakers and 250 topics. Multimodal EmotionLines Dataset. In 2018, Soujanya's team proposed the Multimodal EmotionLines Dataset (MELD) to fill the gap that there was no large-scale multimodal multiparty emotional conversational database containing more than two speakers [32]. The MELD dataset is derived from the EmotionLines dataset, a text-only dialogue corpus from the classic TV series Friends. MELD is a multimodal dataset encompassing audio, visual, and textual modalities, containing about 13,000 utterances from 1,433 dialogues. Each utterance in a conversation snippet is annotated with one of seven emotional labels: anger, disgust, sadness, happiness, neutral, surprise, and fear. At the same time, each utterance is annotated with one of three sentiment labels: positive, negative, and neutral. CH-SIMS Dataset. In 2020, Yu's team proposed the Chinese single- and multimodal sentiment analysis dataset (CH-SIMS), a fine-grained annotated Chinese multimodal sentiment analysis dataset [33]. CH-SIMS collected 60 original videos from movie clips, TV series, and various performance programs and cut them at the frame level, finally obtaining 2,281 video clips. The annotators annotate each video clip in four modes: text, audio, silent video, and multimodal. Each mode of each video clip is marked by 5 annotators, and the labels are divided into -1 (negative), 0 (neutral), or 1 (positive). DuVideoSenti Dataset. In 2021, Tang's team proposed a multimodal sentiment analysis dataset named the baiDu Video Sentiment dataset (DuVideoSenti), which consists of 5,630 videos displayed on Baidu [34]. In this dataset, each video is manually annotated with a sentimental style label that describes the user's real feeling about the video. The sentimental style labels used to describe the visual and sentimental feelings of users after browsing a video are: hipsterism, fashion, warm and sweet, objective and rationality, daily, old-fashion, cute, vulgar, positive energy, negative energy, and others. Multimodal Sentiment Chat Translation Dataset. In 2022, Liang's team proposed a Multimodal Sentiment Chat Translation Dataset (MSCTD) containing 142,871 English-Chinese utterance pairs in 14,762 bilingual dialogues and 30,370 English-German utterance pairs in 3,079 bilingual dialogues [35]. Each utterance pair, corresponding to the visual context that reflects the current conversational scene, is annotated with one sentiment label (positive/neutral/negative). Procedure of Multimodal Corpus Construction for IoT in MEC In this section, we introduce a procedure for constructing a multimodal corpus based on multiparty dialogues collected by IoT devices, to provide an idea and method for researchers to construct multimodal corpora from multiple perspectives. Annotation Scheme. A fundamental requirement on video fragments for constructing a multimodal corpus is that the speaker's face and voice must appear in the same video fragment at the same time and remain present for a certain period of time.
In order to get video fragments containing multiparty dialogues as raw data sources for our multimodal corpus construction that are as close to life as possible, we can use IoT devices to collect dialogue fragments from daily life or from situation comedies. At the same time, in order to effectively obtain the corresponding video fragments from the multiparty dialogues collected by IoT devices, there are specific requirements for the collected target fragments: (1) The picture quality of the video should be as clear as possible. (2) The characters in the whole video should be as few as possible; in particular, when the speaker speaks, the background characters in a video fragment should be as few as possible. (3) In a video fragment, when the speaker speaks, noise such as background music should be avoided as much as possible. Video fragments obtained from the dialogue fragments collected by IoT devices that meet the above requirements help the annotators determine the sentiment and emotion of the speaker appearing in the video, improving the accuracy of the annotation and of the features extracted for all modalities, including text, audio, and vision. In the procedure of corpus construction, in addition to ensuring the quality of the video fragments to be annotated, the annotation scheme is also indispensable. As a result, it is necessary to develop an annotation scheme to ensure the smooth construction of the multimodal corpus, meet the quality requirements of the corpus, and facilitate annotation. Annotation Unit. The annotation unit of the corpus is one speech turn of a speaker in a dialogue. The principle of consistency should be followed when annotating each video fragment: (1) Only one speaker is allowed to speak in the same video fragment to be annotated, and the emotion should not change greatly during the speech. (2) The video fragment to be annotated should be neither too short nor too long for the same speaker. (3) Short sentences of the same speaker with the same emotion should be combined as appropriate, and longer sentences should be properly segmented according to the principle of semantic integrity and consistency. (4) The video fragment to be annotated must meet the requirement that the people appearing on the screen include the speaker. If a video fragment does not conform to the above principle of consistency, it is filtered out immediately. In short, the combination and segmentation of annotation units should follow the principle of not combining the speeches of different speakers or scenarios, so that it is convenient for the annotators to annotate. Annotation Template. The annotation template designed in this paper includes the information of season, episode, dialogue, utterance number, speaker, utterance, emotion, and sentiment. If a video fragment is obtained from dialogue fragments collected by IoT devices from daily life, the season and episode information may not be required. If it is obtained from situation comedies, the season information indicates the season to which the video fragment belongs, and the episode information indicates the episode to which it belongs. The dialogue information indicates the scene to which a video fragment belongs.
The utterance number indicates the position of a video fragment in one dialogue. The speaker indicates the person who is talking in a video fragment. The utterance is the text of what the current speaker says. The emotion represents the external emotion of the person who is talking in a video fragment. The sentiment represents the internal feeling of the person who is talking in a video fragment, which needs to be judged from the context of the dialogue. Classification of Emotion and Sentiment. Sentimental information and emotional information are the focus of the whole annotation scheme. The division of sentiment categories and emotion categories is the cornerstone of the construction of the whole corpus. The standard of classification takes into account the coverage of sentiment and emotion categories in the whole video. In order to organize the classification of sentiment and emotion reasonably, this paper refers to several multimodal corpora and summarizes their sentiment and emotion classification methods in Table 1. As can be seen from Table 1, the emotion classification of most multimodal corpora includes anger, disgust, fear, happiness, sadness, and surprise, and the sentiment classification of most multimodal corpora includes positive, negative, and neutral. On the one hand, the classification strives to cover more annotation examples so that each annotation unit has an accurate category as far as possible. On the other hand, it controls the number of emotion and sentiment categories and the inclusion relation between them to ensure the mutual exclusion of categories. Considering the above factors and referring to the six basic emotion types proposed by Ekman [36], this paper divides sentiment into positive, negative, and neutral, and divides emotion into anger, disgust, fear, joy, sadness, surprise, and neutral. In real life, people do not necessarily show the corresponding emotions when expressing their internal feelings; sometimes the displayed emotion conflicts with a person's internal feeling. In such cases, many aspects must be judged to determine the character's real feelings. Therefore, when annotating sentiment information, logical reasoning combined with the dialogue context is needed to accurately annotate the speaker's sentiment. The annotation of emotion information only needs to be based on the external emotional expression presented by the speaker. Data Preprocessing on MEC Server. Multiparty dialogues collected by IoT devices can be uploaded to MEC servers through network transmission to make full use of the idle storage space distributed at the edge of the network. The procedure of data preprocessing is to use the video editing tool Adobe Premiere Pro on MEC servers to cut the target fragments at the frame level according to the consistency principle specified in the annotation unit section, which is very time-consuming but accurate enough to obtain video fragments that meet the requirements of multimodal corpus construction. If the video fragments collected by IoT devices are from daily life, each edited video fragment should be classified according to the scene to which it belongs.
If the video fragments collected by IoT devices are from situation comedies, each edited video fragment should be classified according to the season, the episode, and the scene to which it belongs. Annotation Specification. The annotation specification includes two parts, the annotation principle and the annotation process, which are used to control the whole procedure of multimodal corpus construction. Annotation Principle. When annotating sentiments and emotions, the annotated sentiments and emotions are those of the characters speaking in a video fragment, not those of the annotator watching the fragment. At the same time, in a scene, the speaker's sentiment is easily affected by other characters, and the speaker's sentiment usually has a certain continuity within a scene: the sentiment in the preceding utterance and in the following utterance are likely to be the same or similar. Therefore, when annotating the sentiment of each speaker, the sentiment should not be annotated from the current utterance in isolation but judged with reference to the surrounding dialogue. Annotation Process and Quality Review. We group the preprocessed data according to the scene and then give each group of data to two annotators, who annotate each group on the MEC servers according to the information contained in the annotation template. If the preprocessed data are collected by IoT devices from daily life, the information included in the annotation template is dialogue, utterance number, speaker, utterance, emotion, and sentiment. If the preprocessed data are collected by IoT devices from situation comedies, the template additionally includes season and episode. Parts with inconsistent sentiment and emotion annotations are handed over to a third person for judgment and decision. The process of annotation is shown in Figure 1. The consistency of annotation is the key index for evaluating the construction of a corpus; we can evaluate the quality of the corpus by comparing the consistency of the sentiment and emotion annotations between the two annotators. Building the Deep Learning Model Deployed on MEC. For our deep learning model, the overall deployment framework is shown in Figure 2. We ran our sentiment analysis model (SAM) and other strong algorithm models on a public dataset named IEMOCAP. According to the comparison results shown in Figure 3, our sentiment analysis model performs better than the other strong models. Feature Extraction (1) Textual Feature. Because the structure of previous pretraining models was limited by the unidirectional language model (left-to-right or right-to-left pretraining), their representation ability was also limited, in that they could only obtain unidirectional context information. BERT uses masked language modeling (MLM) for pretraining and deep bidirectional Transformer components to build the whole model, so it generates deep bidirectional language representations that integrate the left and right context information, which is the reason we use the pretrained language model BERT to extract textual features [37]. Firstly, after the deep encoding of the discourse-level data by BERT, the vector at the [CLS] position is taken as the discourse-level feature representation. Finally, the dimension of the textual features is reduced by a fully connected layer to obtain 300-dimensional textual sentiment features.
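As an illustration of this step, the following is a minimal sketch (not the authors' code) of extracting the [CLS] vector with the HuggingFace transformers library and projecting it to 300 dimensions with a fully connected layer; the checkpoint name and the untrained projection layer are assumptions made for the example.

```python
import torch
from transformers import BertModel, BertTokenizer

# Assumed checkpoint; the paper does not state which BERT variant was used.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
proj = torch.nn.Linear(768, 300)  # fully connected reduction to 300-dim features

def textual_features(utterance: str) -> torch.Tensor:
    inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = bert(**inputs)
    cls_vec = out.last_hidden_state[:, 0, :]  # vector at the [CLS] position
    return proj(cls_vec)                      # shape (1, 300)

print(textual_features("I guess it could be worse.").shape)  # torch.Size([1, 300])
```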
(2) Audio Feature. OpenSMILE, which is widely applied to automatic sentiment recognition in affective computing, is an open-source toolkit for audio feature extraction and for the classification of speech and music signals, which is the reason we use OpenSMILE to extract audio features. Firstly, we used the OpenSMILE toolkit to obtain 384-dimensional audio sentiment features, including prosodic and spectral features; the audio features are then normalized by the standard normalization (Z-score) method. The dimension of the audio features is reduced by a fully connected layer to obtain 300-dimensional audio sentiment features. (3) Visual Feature. A CNN with 3D convolutional kernels, which outperforms 2D CNNs through the use of large-scale video datasets, is intuitively effective because such 3D convolutions can directly extract spatiotemporal features from raw videos [38]. Firstly, the video fragment is segmented into equal frames, and the face part in each frame is recognized and extracted. A 3D CNN combining multilayer convolution and pooling modules is used to obtain the visual features contained in the video fragment. Finally, the dimension of the visual sentiment features is reduced by a fully connected layer to obtain 300-dimensional visual sentiment features. Sentiment Encoder. DialogueRNN is a neural network that keeps track of the individual party states throughout the conversation and uses this information for sentiment classification; it is capable of handling multiparty conversations [39]. The DialogueRNN model, which has an effective mechanism for modeling the context by tracking the individual speaker states throughout the conversation, is based on the assumption that there are three major aspects relevant to the sentiment in a conversation: the speaker, the context from the preceding utterances, and the sentiment of the preceding utterances. It employs three gated recurrent units (GRUs) to model the sentimental context in conversations. We use the DialogueRNN model as the sentiment encoder, the multimodal feature vectors as the input of the sentiment encoder, and sentiment features fused with context features as its output. Deep Learning Model. After extracting the textual, audio, and visual feature vectors, we splice the vectors of two or three modalities together to obtain the multimodal feature vector of the current utterance on the MEC server. The multimodal feature vectors are used as the input of the sentiment encoder to obtain sentiment feature representations that integrate the context features. Finally, the output sentiment features are used directly as the input of a fully connected layer + softmax layer (the sentiment classifier) to obtain the sentiment label of the current utterance.
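To make the data flow concrete, here is a minimal sketch, under the assumption of simple concatenation fusion and a single GRU standing in for the full DialogueRNN machinery (which additionally tracks per-speaker states); the dimensions follow the 300-per-modality convention described above, and the class count assumes the positive/neutral/negative scheme.

```python
import torch
import torch.nn as nn

class SimpleSentimentModel(nn.Module):
    """Sketch: concatenated multimodal features -> GRU context encoder -> softmax.
    A simplified stand-in for the DialogueRNN-based sentiment encoder."""
    def __init__(self, modal_dim=300, n_modalities=3, hidden=128, n_classes=3):
        super().__init__()
        self.context_gru = nn.GRU(modal_dim * n_modalities, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)  # positive / neutral / negative

    def forward(self, text, audio, vision):
        # Each input: (batch, seq_len, 300); seq_len = utterances in the dialogue.
        fused = torch.cat([text, audio, vision], dim=-1)  # (batch, seq, 900)
        context, _ = self.context_gru(fused)              # context-aware features
        return self.classifier(context)                   # logits per utterance

model = SimpleSentimentModel()
t = torch.randn(2, 10, 300); a = torch.randn(2, 10, 300); v = torch.randn(2, 10, 300)
print(model(t, a, v).shape)  # torch.Size([2, 10, 3])
```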
Construction and Analysis of Multimodal Corpus for IoT. For the tens of thousands of dialogue fragments collected from daily life by IoT devices such as IP cameras, which can collect video and audio information at the same time, process the collected information to a certain extent, and transfer it to MEC servers over the network, we need to select the dialogue fragments according to the specific requirements on the collected target fragments introduced in Section 3.1, in order to pick out the fragments that can be used to construct the multimodal corpus; this may take a lot of time and effort. However, we need to recognize the reality that although IoT devices can collect tens of thousands of dialogue fragments, most of these fragments can only ensure clear image quality; they cannot ensure that there are as few characters as possible in the fragment, nor that noise such as background music is avoided as much as possible when the speaker speaks. These situations may lead us to pick out fewer useful fragments from the collected dialogue fragments for building a multimodal corpus. Even so, the number of dialogue fragments collected by IoT devices that can be used to construct the multimodal corpus far exceeds the number collected by other means. At the same time, we can also use IoT devices to selectively collect the dialogue fragments we need from situation comedies, so that the collected fragments have clear picture quality, fewer characters, and less background noise than those directly collected from daily life. Construction of Multimodal Corpus. Since it takes too much time and energy to collect dialogue fragments for constructing the multimodal corpus using IoT devices from daily life, we can also collect dialogue fragments through IoT devices from situation comedies. Most situation comedies start by creating opposites and contradictions and end by solving the contradictions and reaching reconciliation. Following the procedure for constructing a corpus introduced in Section 3, we chose the Biography of the Naive WuLin, which premiered in 2019, as the data source for constructing the multimodal corpus. Compared with traditional Chinese situation comedies, this situation comedy has clearer picture quality, more obvious emotional characteristics, and less background noise, and few other characters speak at the same time as one character speaks. According to the information specified in the annotation template, some data of the constructed corpus are shown in Table 2. Analysis of Multimodal Corpus. This constructed multimodal corpus contains 5,541 utterances, 330 dialogues, and 25 speakers. Figures 4 and 5, respectively, show the proportion of each sentiment type and each emotion type. In the distribution chart of sentiment proportions, neutral and negative are the two sentiments with the largest proportions, accounting for 39.67% and 38.78%, respectively. In the distribution chart of emotion proportions, neutral and joy are the two emotions with the largest proportions, accounting for 34.60% and 19.15%, respectively. Figure 6 shows the speech proportion of each role in the corpus.
Xiangyu Tong, Furong Guo, and Zhantang Bai speak the most frequently, accounting for 22.72%, 18.01%, and 16.96% of the utterances, respectively, which is consistent with their role status, because the protagonists speak more frequently in situation comedies. The average consistency of the sentiment annotation in this constructed multimodal corpus is 68.5%, and the average consistency of the emotion annotation is 59.5%. Results Due to the limitation of space, this paper uses the constructed corpus to train the deep learning model for sentiment and analyzes the experimental results. Figure 7 shows the experimental results of the fusion of the text and audio modes. The accuracy of the prediction results after fusing the text and audio modes decreases to a certain extent compared with the accuracy of the text mode alone. Figure 8 shows the experimental results of the fusion of the text and vision modes. The accuracy of the prediction results after fusing the text and vision modes is improved to some extent compared with that of the individual text and vision modes. Figure 9 shows the experimental results of the fusion of the audio and vision modes. Compared with the prediction results of the individual audio and vision modes, the accuracy of the prediction results after fusing the two modes is improved to a certain extent. The above prediction results show that the accuracy of the prediction results declines after the features of the audio mode and the text mode are fused, which reflects a certain conflict between the audio and text features. Through analysis, we found the reason for the above results: in the dialogue fragments collected by IoT devices from the situation comedy Biography of the Naive WuLin, the speakers do not use Mandarin but dialects from all over China. This is what we neglected when constructing the corpus. Due to the particularity of dialect pronunciation, there is a certain conflict between the features of the audio mode and the text mode. From the above experimental results, we should note that when collecting multiparty dialogues with IoT devices for constructing a multimodal corpus, we should choose multiparty dialogues in which the speakers use a standard accent. Discussion Our sentiment analysis model can be applied to surveillance systems and can improve their effectiveness. The types of such surveillance systems include, but are not limited to, an empty nesters companion system and a security system based on sentiment analysis. The surveillance system uses cameras and other IoT devices to obtain video in real time and transmits the video to MEC servers. The servers analyze the sentiments of the characters in the video and take corresponding actions according to the analysis results. Empty Nesters Companion System. The system can collect the sentiments of empty nesters through IoT devices such as camera equipment and sensors and through artificial intelligence technologies such as our sentiment analysis model, extract and classify the sentiment features, and transmit them to MEC servers for sentiment analysis. Then, computers give feedback according to the analysis results. The feedback forms include, but are not limited to, voice, image, or action.
The system can use robots as carriers so that empty nesters interact well with the system, reducing their loneliness and increasing their happiness in daily life. Security System Based on Sentiment Analysis. In stations and airports, a security system based on sentiment analysis can collect the sentiment information of every traveller passing through the security gate in real time. The system can automatically single out people with a high possibility of committing a crime by analyzing their sentiment over a short period of time, so as to prevent criminal violations. Conclusion In this paper, we propose a procedure for constructing a multimodal corpus from multiparty dialogues collected by IoT devices. We construct a multimodal corpus on MEC servers according to this procedure, using the multiparty dialogues collected by IoT devices. At the same time, we build a sentiment analysis model and train it for sentiment on the MEC server using the constructed multimodal corpus. According to the experimental results, we find that when collecting the multiparty dialogues used to construct the multimodal corpus, the speakers should preferably use a standard accent, which improves the effectiveness of the constructed corpus. In the future, we will try to use IoT devices to collect multiparty dialogues from daily life to build a multimodal corpus and will continue to improve the details of the sentiment analysis model. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
6,939.8
2022-08-05T00:00:00.000
[ "Computer Science" ]
IIT (BHU) Submission on the CoNLL-2016 Shared Task: Shallow Discourse Parsing using Semantic Lexicons This paper describes the Shallow Discourse Parser (SDP) submitted as part of the CoNLL 2016 Shared Task. The discourse parser takes newswire text as input and outputs relations between various components of the text. Our system is a pipeline of sub-tasks, which are elaborated in the paper. We choose a data-driven approach for each task and put a special focus on utilizing the resources allowed by the organizers to create novel features. We also give details of various experiments with the dataset and the lexicons provided for the task. Introduction Shallow Discourse Parsing (SDP) is a linguistic task that identifies semantic relations between a pair of lexical units in a piece of discourse. A discourse relation is defined by three entities: a connective, a pair of lexical units between which the relation exists, and the type or sense of the relation between them (Xue et al., 2016). Discourse relations can be explicit, in which the relation is expressed by certain words or phrases, or implicit, where words are not directly used to convey the relation and the meaning is instead implied. The words or phrases which directly convey the existence of a discourse relation are called connectives. The lexical units between which the relation exists could be a pair of clauses, a pair of sentences, or even multiple sentences, which can be adjacent or non-adjacent. These are called arguments. A discourse treebank called the Penn Discourse TreeBank, or PDTB (Prasad et al., 2008), serves as the gold standard for this task and is used as training data. The output of our system follows the same format as the PDTB. Development data is also provided to perform experiments on the system. Phrase-structure and dependency parses of both the training and development data have also been provided to assist in the task. Further details of the Shared Task can be found in the overview paper (Xue et al., 2016). The final evaluation of the parser is on test and blind datasets through the TIRA platform set up by Potthast et al. (2014). Besides automating the submission and evaluation system, TIRA also has provisions for plagiarism detection, author identification, and author profiling. The SDP task can be broadly classified into the two categories of explicit and non-explicit relation detection. We discuss the pipeline of the explicit parser in Section 2 and the non-explicit parser in Section 3. Various results and experiments are reported in the relevant sub-sections. These results are based on individual stages without error propagation from previous stages, unless specified otherwise. We report results on the test and blind datasets and conclude our work in Sections 4 and 5, respectively. Explicit SDP The identification of explicit discourse relations consists of several stages. The first stage is the detection of discourse connectives in the text. The connective binds the arguments syntactically and semantically (Prasad et al., 2008), which is helpful in feature creation for the subsequent tasks of argument position detection and argument span extraction. Once the arguments of the relation are extracted, we perform sense classification of the relation. Connective Detection This is the first stage of the parser, which detects the existence of discourse connectives in the text. The input to this stage is raw text, and we analyze the entire text for the presence of connectives which could form a discourse relation.
Around 100 connective spans have been identified through the extensive research of the team that annotated the PDTB (Prasad et al., 2008). However, the occurrence of these words does not guarantee that they form a discourse connective, as can be seen in the following examples: My Father once worked on that project. - 'once' is a non-discourse connective. You cannot change your statement once it comes out of your mouth. - 'once' is a discourse connective. Here the word 'once' acts as a discourse connective or not depending on the context. A string-matching script is not sufficient for this task, and we therefore use a Maximum Entropy classifier to identify whether a potential connective keyword actually forms a discourse relation or not. This task has been sufficiently mastered, and high F1 scores have been reported by previous teams. Mostly syntactic features have been used for this classification task, such as connective, connectivePOS, prevWord, prevPOSTag, prevPOS+connectivePOS, nextWord, nextPOSTag, nextPOS+connectivePOS, root2Leaf, root2LeafCompressed, leftSibling, rightSibling, and parentCategory. These features have been borrowed from the previous work of Wang and Lan (2015). Argument Labeler After identifying the discourse connective span present in the input text, we need to locate the relative position of the arguments w.r.t. the sentence containing the connective. Arg2 is taken as the argument which occurs in the same sentence as the connective and is therefore syntactically associated with it (Prasad et al., 2008). Hence, we identify the position of Arg1 relative to Arg2 and the connective. The argument labeling task can be divided into the following sub-tasks: • Identifying the relative position of Arg1 w.r.t. Arg2 (and the connective) • Extracting clauses which are potential argument spans • Classifying the candidate clauses into Arg1, Arg2 or Null Argument Position Classifier We need to identify whether the arguments are located in the same sentence (the SS case) or in a sentence before the connective (the PS case). We ignore the following-sentence (FS) case and the non-adjacent PS case, since these types have a small percentage of instances. The features used for the argument position classifier are connectiveString, connectivePOS, connectivePosition, prevWord, prevWord+connective, prevPOS+connectivePOS, prev2Word, prev2Word+connective, prev2POStag, and prev2POS+connectivePOS. The feature names are self-explanatory. The connective string itself is a very good feature for this stage. For instance, when the connective token is 'And' (with the first letter capitalized), there is a continuation of an idea from the previous sentence, and thus Arg1 is likely to be in the PS. Whereas, when the first letter of the connective is in lowercase, such as 'and', Arg1 is very likely to be the clause on the left-hand side of 'and', placing Arg1 in the same sentence as the connective. Connective position, which takes the values 'start', 'middle' and 'end', is also a very useful feature. This argument position classifier is trained using a Maximum Entropy classifier.
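To make the feature design concrete, here is a minimal sketch (not the submitted system) of building a feature dictionary for a candidate connective and training NLTK's Maximum Entropy classifier on it; the toy training pairs and the feature subset are illustrative assumptions, and the real system would also include the parse-tree features (root2Leaf, siblings, etc.).

```python
import nltk

def connective_features(tokens, pos_tags, i):
    # Window-based syntactic features for the candidate connective at index i
    # (an illustrative subset of the feature set listed above).
    prev_w = tokens[i - 1] if i > 0 else "NONE"
    prev_p = pos_tags[i - 1] if i > 0 else "NONE"
    next_w = tokens[i + 1] if i + 1 < len(tokens) else "NONE"
    next_p = pos_tags[i + 1] if i + 1 < len(tokens) else "NONE"
    return {
        "connective": tokens[i].lower(),
        "connectivePOS": pos_tags[i],
        "prevWord": prev_w, "prevPOSTag": prev_p,
        "prevPOS+connectivePOS": prev_p + "_" + pos_tags[i],
        "nextWord": next_w, "nextPOSTag": next_p,
        "nextPOS+connectivePOS": next_p + "_" + pos_tags[i],
    }

# Toy training pairs mirroring the 'once' examples in the text.
train = [
    (connective_features(["My", "Father", "once", "worked", "on", "that", "project"],
                         ["PRP$", "NN", "RB", "VBD", "IN", "DT", "NN"], 2), "non-disc"),
    (connective_features(["change", "your", "statement", "once", "it", "comes", "out"],
                         ["VB", "PRP$", "NN", "IN", "PRP", "VBZ", "RP"], 3), "disc"),
]
clf = nltk.MaxentClassifier.train(train, max_iter=10)
print(clf.classify(train[1][0]))  # 'disc' on the training example itself
```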
Argument Span Extractor This stage of the pipeline extracts the span of the arguments from the sentence or sentences containing the discourse relation. To extract arguments, we first break the sentence into clauses. Two methods have been proposed in the literature to carry out this task: Lin's tree subtraction method (Lin et al., 2014) and Kong's constituency-based method (Kong et al., 2014). According to Kong et al. (2014), the constituency-based approach outperforms Lin's tree subtraction algorithm. However, since Kong's method uses the connective node in the parse tree as the base node for the recursion, we can only use it for sentences which contain the connective. Hence, we use Kong's extraction method for same-sentence argument extraction. SS Argument Extractor: Kong's constituency-based approach is a recursion in which the connective's lowest tree node is chosen as the target node, and its left and right siblings are chosen as candidates for arguments. The target node is then updated to the current target node's parent, and the process is repeated. There is a slight modification of the algorithm for multi-word connectives. Similar to Kong et al.'s approach for multi-word connectives, we choose the immediate left siblings of the first word of the connective and the immediate right siblings of the last word of the connective as candidate arguments, in addition to taking the left and right siblings of the lowest node that covers the entire connective. This modification for multi-word cases is important, as the modified algorithm extracts more refined constituents from the sentence. In the following example, the updated algorithm extracts 'the New York market opened' as a constituent, whereas the algorithm without the multi-word case would not have extracted it at all. Consider the following example with its gold-standard parse tree, as shown in Figure 1: (1) In the center of the trading floor, chief trader Roger Streeter and two colleagues scrambled for the telephones as soon as the New York market opened - plummeting more than 60 points in the first few minutes. The argument candidates detected are: 'In the center of the trading floor', ',', 'chief trader Roger Streeter and two colleagues', 'scrambled', 'for the telephones', 'the New York market opened', '-', 'plummeting more than 60 points in the first few minutes', '.'. The final extracted arguments are: Arg1 - 'In the center of the trading floor, chief trader Roger Streeter and two colleagues scrambled for the telephones'; Arg2 - 'the New York market opened'. In the PS case, we take the entire previous sentence as Arg1, and Arg2 is taken as the sentence containing the connective after subtracting the connective tokens from it. For this task, one can also use Lin's clause tree subtraction method (2014) to extract Arg1 and Kong's constituency-based approach (2014) to extract Arg2 for better performance.
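The following is a minimal sketch of the single-word-connective case of this recursion using NLTK's Tree (the toy tree and function name are ours, not from the paper): walk up from the connective's node, collecting left and right siblings of the current target node as argument candidates at each level.

```python
from nltk import Tree

def candidate_constituents(tree, conn_leaf_index):
    """Sketch of the constituency-based candidate extraction (after Kong et al.,
    2014): collect siblings of the target node while moving it up to its parent."""
    path = tree.leaf_treeposition(conn_leaf_index)[:-1]  # connective's lowest node
    candidates = []
    while len(path) > 0:
        parent_pos, child_idx = path[:-1], path[-1]
        parent = tree[parent_pos] if parent_pos else tree
        for j, sibling in enumerate(parent):
            if j != child_idx and isinstance(sibling, Tree):
                candidates.append(" ".join(sibling.leaves()))
        path = parent_pos  # move the target node up to its parent
    return candidates

t = Tree.fromstring(
    "(S (SBAR (IN as) (S (NP (DT the) (NN market)) (VP (VBD opened)))) "
    "(, ,) (NP (NNS traders)) (VP (VBD scrambled)))")
print(candidate_constituents(t, 0))
# ['the market opened', ',', 'traders', 'scrambled']
```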
Argument Classification The features used for classifying the extracted phrases into arguments have been borrowed from the previous works of Kong et al. (2014) and Wang and Lan (2015). These features are used to classify each candidate into one of three categories: 'Arg1', 'Arg2' or 'Null'. Both connective-based and constituent-candidate-based features are used: the connective string, the POS tag of the connective, leftSiblingNo (the number of left siblings of the connective), rightSiblingNo (the number of right siblings of the connective), ConnCat (the syntactic category of the connective, which takes the values 'subordinating', 'coordinating' or 'discourse adverbial'), clauseRelPos (the position of the constituent candidate relative to the connective, which takes the values 'left', 'right' or 'previous'), clausePOS (the POS tag of the constituent candidate), clauseContext (the context of the constituent, i.e. the POS combination of the constituent, its parent, left sibling and right sibling; when there is no parent or sibling, it is marked as NULL), and conn2clausePath (the path from the connective node to the node of the constituent). Once the classifier tags the clauses with labels, Arg1 and Arg2 are obtained by stitching together the strings of the ordered Arg1 clauses and Arg2 clauses, respectively. Explicit Sense Classification After determining the spans of Arg1 and Arg2, we feed these arguments into the next stage of the pipeline, which detects the sense of the explicit discourse relation. The connective string itself is a good indicator of the sense of the relation, due to the lexical mapping between them. However, there are cases of ambiguity, in which a connective word is used to describe multiple senses. For this reason, the task requires machine learning, in which the classifier uses other syntactic features to determine the sense of the relation. The features we used for training are connectiveString, connectiveHead, connectivePOS, connectivePrev, connectivePosition, connectiveCategory, subjectivityStrengthArg1, subjectivityStrengthArg2, verbNetClassArg1 and verbNetClassArg2. The subjectivity strength and VerbNet class features are created from the semantic lexicons provided for the shared task and are described in the next section. From the results in Table 3, we note that Naive Bayes performs better than the MaxEnt classifier. We conjecture that the cause is that the MaxEnt classifier tends to overfit the data. Hence, Naive Bayes is chosen to perform sense classification. Non-Explicit SDP There are three types of non-explicit relations: Implicit, EntRel and AltLex. The remaining pairs of sentences, which did not contain explicit connectives, are fed into this stage of the pipeline. Our system treats all the remaining adjacent sentences as Implicit relations. This hard coding cost us a high performance dip, as EntRel relations constitute about a third of the non-explicit data (215 EntRel relations and 522 Implicit relations in the development data) and not all remaining sentence pairs contain an implicit relation. In our implicit argument span detector, we treat the first sentence in an adjacent sentence pair as Arg1 and the second sentence as Arg2. Next, we focus on the Implicit Sense Classification task. Implicit Sense Classification This task is considered the bottleneck of SDP systems and is especially challenging due to the lack of connective-based features. We create a different set of baseline features, borrowed from (Lin et al., 2014). We describe the semantic features used for this task in detail in this section. Baseline features The baseline features chosen for this task are syntactic features created from the dependency and constituency parses of the two arguments. First, both dependency and constituency parse rules from the entire training corpus were extracted.
This created around 12,489 constituency parse rules and 89 dependency parse rules. However, the number of parse features was too high and unwieldy to work with. Hence, we put a frequency cap of 5 on the feature set, which brought the number of features down to 2,515. We also used NLTK's stop words to filter out dependency parses created by common words, as these parse rules recur frequently over the distribution of the entire corpus. Semantic Features Sense detection is essentially a semantic task, since we are trying to determine the "meaning" of the relation. For this reason, we have experimented with semantic tools and lexicons such as MPQA Subjectivity, VerbNet classes and Word2Vec. Each of these has been provided for the closed track of Shallow Discourse Parsing. MPQA Subjectivity: The semantic feature created using the MPQA subjectivity lexicon measures the negativity and positivity strength of the arguments. For calculating the subjectivity strength of an argument, the subjectivity annotation of each word of the argument is taken. If a word has negative and strong polarity, it is assigned -2; for negative and weak polarity it is assigned -1; for strong positive polarity +2; and for weak positive polarity +1. The subjectivity strengths of all words in the argument are summed up. If the sum is 0, the argument is neutral; otherwise it is positive or negative. VerbNet Classes: VerbNet is a verb lexicon with mappings to WordNet and FrameNet. VerbNet is organized into classes (with subclasses) on the basis of syntactic and semantic similarity (Kipper et al., 2006). We have created the verbNetClassArg1 and verbNetClassArg2 features, which contain the VerbNet class of the lemmatized form of the main verb of the respective argument (Zhou, 2015). VerbNet classes are important features, and this was verified by analyzing the most informative features of this classification task: we find that many VerbNet classes are more informative than even the baseline features. Word2Vec: Subjectivity strength and VerbNet classes only capture information about specific, albeit important, words of a sentence. To capture the context of the entire argument and the interaction between the arguments, we use the Word2Vec tool. Word2Vec is a deep learning tool that outputs a vector representation of an input word in a high-dimensional vector space. We have used Google's Word2Vec model trained on a part of the Google News dataset of 100 billion words. This Word2Vec model contains 300-dimensional vectors for 3 million words and phrases. Words with similar meanings are expected to have vectors in close proximity in the vector space (Mikolov et al., 2013). Inspired by (Yih et al., 2013), a work on a question-answering system, we represent the entire Arg1 and Arg2 as vectors. We take each argument, drop the stop words, and then take a weighted sum over the vector representations of the remaining words of the argument. Even after removing stop words, there is a difference in the importance and relevance of the remaining words, which is why we choose a weighted sum of the word vectors. We chose TF-IDF (Term Frequency-Inverse Document Frequency) scores as weights. The TF-IDF value increases proportionally with the number of times a word appears in the document and decreases with the frequency of the word in the corpus. This balances the weights of words which occur more frequently in the literature.
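Here is a minimal sketch of the two hand-crafted semantic features described above (the scoring scheme follows the text; the function names and toy resources are ours): a summed MPQA polarity strength per argument, and a TF-IDF-weighted sum of Word2Vec vectors per argument together with the dot-product distance used later.

```python
import numpy as np

def subjectivity_strength(words, mpqa):
    """Sum per-word MPQA polarity scores: -2/-1 for strong/weak negative,
    +2/+1 for strong/weak positive, as described above."""
    return sum(mpqa.get(w.lower(), 0) for w in words)

def arg_vector(words, w2v, tfidf, dim=300):
    """TF-IDF-weighted sum of word vectors for one argument
    (stop words assumed already removed)."""
    vecs = [tfidf.get(w, 1.0) * np.asarray(w2v[w]) for w in words if w in w2v]
    return np.sum(vecs, axis=0) if vecs else np.zeros(dim)

def cosine_distance(v1, v2):
    # Dot product of the two argument vectors, as used for the feature.
    return float(np.dot(v1, v2))

# Toy demonstration with stand-in resources; the real system loads
# GoogleNews-vectors-negative300.bin (e.g. via gensim's KeyedVectors).
mpqa = {"great": 2, "scrambled": -1}
w2v = {"market": np.ones(300), "opened": -np.ones(300)}
a1 = arg_vector(["market"], w2v, {})
a2 = arg_vector(["opened"], w2v, {})
print(subjectivity_strength(["great", "scrambled"], mpqa))  # 1
print(cosine_distance(a1, a2))                              # -300.0
```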
We perform PCA (Principal Component Analysis) over the Arg vectors to reduce the vector dimension from 300 to 3, as the depth of the sense classes is also three or less; by intuition, we only require three dimensions to represent the three levels of sense classes. We perform K-Means clustering over the Arg vectors of the training data and assign clusters to Arg1 and Arg2 of the development, test and blind data as the Arg1Cluster and Arg2Cluster features. We used sklearn's TfidfVectorizer to compute the TF-IDF scores, and sklearn's PCA and K-Means to perform the clustering over the vectors. The cosineDistance feature is a dot product of the Arg1 and Arg2 vectors; we hope to capture the similarity or closeness of the two arguments with this numerical value. The following formula is used for calculating the cosine distance: d = \sum_{i=1}^{n} \mathrm{Arg1}_i \, \mathrm{Arg2}_i. Here, d is the cosineDistance and n is 300, the dimension of the vector space of GoogleNews-vectors-negative300.bin, the Word2Vec model trained on the Google News dataset. Experimentation We used a combination of the features described above to gauge their performance on the sense classification task. VerbNet and subjectivity features are known to perform well according to previous literature. Hence, we test the novel Word2Vec features on top of the baseline features, subjectivity strength and VerbNet classes; for this reason, we call the combination of baseline features, subjectivity strength and VerbNet classes the baseline in Table 4. The results reported in Table 4 are on the development dataset. As expected, the Word2Vec features improve the F1 score by about 2.3%. Thus, we use a combination of all the baseline features, subjectivity features, VerbNet features and Word2Vec features in the Implicit Sense Classification task. Also, the number of parse features is very high (2,515), making the total number of features 2,522. Therefore, we use NLTK's Naive Bayes classifier over Maximum Entropy, as NLTK's implementation of Maximum Entropy is not able to handle this vast number of features. Table 2 contains the results of our updated system on the development, test and blind datasets. In the updated system, we fixed a small bug in the argument index alignment code, which doubled our overall parser F1 score on the development data; hence, we report the updated results in the paper. We also used the Word2Vec features in our updated system. The Word2Vec features did not improve the F1 score of Implicit Sense Classification on the development and test datasets, probably because of error propagation from previous stages. Surprisingly, the updated implicit classifier performs better on the blind dataset than on the development and test datasets. There are several weak links in our pipeline. For instance, the PS-Explicit and Implicit argument extractors are naive and hard-coded. This is one major cause of the low F1 scores compared with Wang et al. We feel that by fixing these links, we can improve the result by a significant margin. Conclusion In this work, we have implemented a discourse parser trained on the PDTB corpus, with a special focus on using semantic lexicons. We have described the system architecture and various experimental results in the paper. Our contribution to the SDP system is the introduction of novel features for the bottleneck of SDP systems, i.e. the Implicit Sense Classification task. Specifically, we have created the Arg1Cluster, Arg2Cluster and cosineDistance features using the Word2Vec tool for the Implicit Sense Classification task, which improved the F1 score of the task by about 2.3%.
The task of Shallow Discourse Parsing should give even more promising results by making use of other lexical and semantic tools, encouraging further research to obtain better results.
4,445
2016-08-01T00:00:00.000
[ "Computer Science" ]
A robust algorithm for rate-independent crystal plasticity A new stable return-mapping algorithm enables crystal-plasticity solutions by using a regularized yield surface with very large exponents, for which the rate-independent limit of the Schmid assumption is in practice reached. Numerical stability is enabled by an improved initial guess for the stress solution and by applying a line search for each Newton iteration. A hypo-elastic-plastic corotational formulation is chosen, where the tensors are contracted in a way that naturally degenerates to the rigid-plastic formulation. The consistent algorithmic tangent modulus is derived, and a fast and very stable open-source implicit implementation in a finite element software is explained and demonstrated for simulations of the necking of a single crystal and of the deformation of a polycrystalline representative volume element. The simulations run stably, allowing large time steps; hence, the simulation times are significantly shorter than for explicit finite element simulations. The framework enables the use of arbitrary types of slip systems. As a demonstration, the importance and interpretation of the yield-surface exponent and the asymptotic limit of very large exponents are discussed for bcc crystals with {110}⟨111⟩, {121}⟨111⟩ and {132}⟨111⟩ slip systems. Introduction The detailed modeling of metal plasticity, e.g. by finite element codes, requires a crystal-plasticity framework that can efficiently incorporate arbitrary slip systems as well as account for crystal elasticity. In practice, compromises must be made between model complexity and calculation time. This work is limited to the formulation and demonstration of a stable numerical algorithm for rate-independent crystal plasticity, without complex latent hardening of the slip systems [1] and without non-Schmid effects, see e.g. Soare [2]. However, the framework is not limited to these simplifications. The starting point for rate-independent crystal-plasticity theories is the Schmid assumption [3]. Mathematically, this can be expressed as a multi-surface formulation with one yield criterion for each shear stress τ_α, with a corresponding critical resolved shear stress, τ^c_α, on each slip system α. With infinitely many slip systems, the criteria would correspond exactly to the isotropic Tresca yield criterion. However, slip is restricted to certain crystallographic planes and the densely packed atomic directions in the crystal. Hence, the yield stress of a crystal is highly anisotropic, and the inner envelope of the Schmid criteria defines a crystal yield surface with sharp corners.
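Stated explicitly (the notation below is added here for clarity; the source expresses this in words), the Schmid assumption reads, for each slip system α with slip direction d_α and slip-plane normal n_α:

\tau_\alpha = \boldsymbol{\sigma} : \operatorname{sym}\left( \mathbf{d}_\alpha \otimes \mathbf{n}_\alpha \right), \qquad \left| \tau_\alpha \right| \le \tau^{c}_{\alpha} \quad \text{for all } \alpha,

with plastic slip possible only on systems for which the equality holds.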
At sufficiently low temperatures in densely packed crystal structures, dislocation glide will occur on the densely packed atomic planes, due to their low Peierls barrier. However, at elevated temperatures the Peierls-Nabarro contribution to the critical resolved shear stress for glide on non-densely packed slip planes decreases, enabling slip to occur also on these more narrowly spaced slip planes. Furthermore, in non-densely packed structures without densely packed slip planes, e.g. ferritic steel with the bcc structure, several slip systems, each with its respective critical resolved shear stress, will compete even at room temperature. Employing additional slip systems results in a more complex crystal yield surface with more corners and facets. In hcp metals there are only three slip systems available in the densely packed planes; hence, here too, slip on less densely packed planes must be included in a model [4,5]. Furthermore, twinning can be incorporated as pseudo-slip systems [6,7]. To handle all the mentioned cases mathematically, the critical resolved shear stresses for slip on different slip planes will be distinguished and allowed to have individual work hardening in the model considered in this work.
The rule of normality, i.e. associated flow, where the plastic rate-of-deformation is normal to the yield surface, holds for each facet of the crystal yield surface. However, in the rate-independent limit, several solutions meet in a yield-surface corner, and the Taylor ambiguity occurs. For the rate-independent theories, a variety of ambiguity solutions have been suggested, as discussed in the review by Mánik and Holmedal [8].
In general, the critical resolved shear stress depends on the shear rates of the slip systems, but at room temperature the strain-rate sensitivity is low, and the metal is commonly assumed to be rate-independent, as will be assumed in this work. However, rate-independent models will always be simplifications of rate-sensitive models; hence, it is important to understand the simplifications being made. Firstly, it is important to distinguish the instant strain-rate sensitivity from the strain-rate sensitivity that influences the work hardening and consequently needs a certain amount of strain to change the critical resolved shear stress. The latter can be captured by a rate-insensitive model.
The origin of the instant strain-rate sensitivity is that the critical shear stress of a slip system depends on the shear rate of the same slip system. The popular viscoplastic power law [9,10] in Eq. (2) is an example; for recent CPFEM applications, see e.g. [11-15]:

\dot{\gamma}_\alpha = \dot{\gamma}_0 \left| \frac{\tau_\alpha}{\tau_{0\alpha}} \right|^{1/m} \operatorname{sgn}(\tau_\alpha). \qquad (2)

In this model there is no threshold for the critical resolved shear stress, and the resolved shear stress always lies on the rate-dependent critical value, i.e. τ_α = τ^c_α. Furthermore, τ_{0α} and γ̇_0 are constants (that can be strain dependent) and m is the instant strain-rate sensitivity. For CPFEM with the viscoplastic model, efficient implementations [16-22], comparison studies of different algorithms [23-25] and an extensive review [26] have been reported. Even higher numerical efficiency has been achieved by spectral solvers utilizing the fast Fourier transform, although these are limited to cyclic boundary conditions [27-33]. However, the viscoplastic model equations become increasingly difficult to solve numerically for small values of the strain-rate sensitivity, m. So far, most of the numerical algorithms for solving the viscoplastic equations have not been optimized for dealing with the rate-independent limit m → 0.
The speed and stability of the CPFEM and spectral implementations worsen as m decreases. One approximation used to deal with small m is to first perform expensive calculations for the crystal in the crystal coordinate system and then map these solutions by a spectral representation, as suggested by Knezevic, Al-Harbi and Kalidindi [34] and applied in CPFEM by Zecevic, McCabe and Knezevic [35]. Another approach to arbitrarily small m was suggested by Knezevic, Zecevic, Beyerlein and Lebensohn [36]: first calculate solutions with a relatively large m ≈ 0.05, and then obtain solutions for lower values of m by a scaling relation that applies to the viscoplastic law. As pointed out by Mánik and Holmedal [37] for the case of 12 slip systems in fcc, the most activated slip systems do not, in most cases, change for m ≤ 0.1. Since the 56 corner solutions in fcc are quite well separated, this method works for most combinations of a given strain path and grain orientation. However, when more slip systems are activated, e.g. in bcc, some of the corners disappear at lower values of m. In any case, even with m = 0.05, the time steps that can be taken by implicit finite element integration are limited, and the line search and the choice of initial guess proposed in the current work are then very beneficial.

The instant strain-rate sensitivity, m, influences the crystal yield surface in two different ways. Firstly, it rounds off the corners of the crystal yield surface, which for rate-dependent models can be defined as the iso-surface of constant internal work and internal work rate. Note that even when the strain-rate sensitivity is very low, say at room temperature, the round-off of the corners may still be significant. Secondly, the rate dependency causes the crystal yield surface to expand with increasing strain rate. At room temperature this expansion is very weak for most metals and justifies the use of a rate-independent yield surface. However, the rounded corners must still be accounted for, as argued by Holmedal [38]. Eq. (2) can be derived from a potential [38,39] that is proportional to the internal work rate and shape-invariant for different internal work rates. As pointed out by Holmedal [38], an iso-value of this potential corresponds to the regularized yield surface proposed by Gambin and Arminjon [40-44]. Since the plastic strain rate equals the gradient of the potential, the associated flow rule applies. Hence, in the limit m → 0, this model degenerates to a solution of the rate-independent Schmid model. However, since the Schmid model suffers from the Taylor ambiguity, this rate-independent solution is not unique. The physical interpretation is that different models for the strain-rate dependency give different ambiguity solutions.

To avoid the Taylor ambiguity and the corresponding singularities, the Gambin/Arminjon regularization has been applied in many rate-independent crystal-plasticity finite element method (CPFEM) implementations [45-47]. Alternatively, a regularized yield surface based on the approach by Kreisselmeier and Steinhauser [48] has also been commonly applied [49-54]. Note that in the limit of large yield-surface exponents, this approach becomes similar to the Gambin yield surface and therefore degenerates to the same Taylor ambiguity solution, i.e. the one corresponding to the viscoplastic power law in Eq. (2). Another rate-dependent model is the viscous over-stress model of Eq. (3).
Here τ_0^α is a true athermal yield stress and η^-1 is the viscosity of the metal. In the limit η → ∞ this model degenerates to a solution of the rate-independent Schmid model. Note that this corresponds to a different ambiguity solution than the viscoplastic power law in Eq. (2). Implementations have been made by Schmidt-Baldassari [55] and in [55,56], in which minimization of an augmented Lagrangian is used to approximately obtain the rate-independent limit (η → ∞) of the viscous assumption in Eq. (3). Following the arguments by Mánik and Holmedal [8], this ambiguity solution must correspond to the one obtained either by quadratic programming [8,57] or by singular value decomposition [58,59].

Not all ambiguity solutions have a physical interpretation. Many are simply efficient mathematical means to obtain a well-posed, non-singular mathematical problem. For an overview of the various approaches, the reader is referred to recent reviews [37,56,60].

In models for predicting texture evolution during fabrication, e.g. rolling or extrusion, rate-independent statistical aggregate models are useful, i.e. the classical full-constraint Taylor model [61] and the more recent advanced Taylor-type models: the ALamel model [62], the ALamel3 model [8], the GIA/RGC model [63,64] and rate-independent self-consistent models, e.g. [65-68]. In texture-prediction applications, elasticity is not important, and plastic formulations without elasticity are commonly applied. A newly proposed way of contracting tensors [69] will be applied in this paper, in which a rigid-plastic formulation follows naturally as a special case.

Even with the computer capacity available today, CPFEM simulations are challenging. The coupling between the elements leads to a large system of non-linear coupled equations to be numerically integrated one time step at a time. This can be done using either explicit or implicit finite element methods. Due to the stiff nature of the involved system of equations, explicit time stepping is restricted to very small time steps, even with careful use of mass scaling. In explicit CPFEM solvers, a major part of the computing time is spent calculating locally, for each integration point, the stress tensor and the lattice rotation for a given time step, i.e. as prescribed in a user-defined subroutine. With implicit time stepping, by contrast, most of the computing time is spent solving the global finite element equations. To compete with explicit numerical integration schemes, the implicit schemes must be sufficiently stable to allow order-of-magnitude larger time steps. A stable return-mapping algorithm is the key.

At room temperature, a realistic exponent for this yield surface is orders of magnitude larger than the high exponents applied for yield surfaces in continuum plasticity. Numerical convergence of the return-mapping algorithm becomes more and more difficult with increasing yield-surface exponent and has until now been a major numerical challenge. However, recent progress has been reported within continuum plasticity, where stable return-mapping algorithms using a line-search approach have been demonstrated [69-73].
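The line-search idea referred to here can be sketched compactly. The version below scales a Newton increment by a step length α chosen from a quadratic interpolant of a merit function ψ(α) through ψ(0), ψ'(0) and ψ(α), with simple safeguards; the merit function and safeguards are generic placeholders, not the specific quadratic and minimization line searches developed later in this paper.

```python
import numpy as np

def quadratic_line_search(psi, dpsi0, max_iter=20, eta=0.9):
    """One quadratic backtracking line search along a Newton direction.
    psi(alpha): merit function; dpsi0 = psi'(0) < 0 (descent direction assumed).
    Returns a step length alpha in (0, 1] with psi(alpha) < eta * psi(0)."""
    psi0, alpha = psi(0.0), 1.0
    for _ in range(max_iter):
        psi_a = psi(alpha)
        if psi_a < eta * psi0:                 # sufficient decrease: accept the step
            return alpha
        # minimizer of the quadratic through psi(0), dpsi0 and psi(alpha)
        denom = psi_a - psi0 - dpsi0 * alpha
        alpha_new = -0.5 * dpsi0 * alpha**2 / denom if denom > 0 else 0.5 * alpha
        alpha = np.clip(alpha_new, 0.1 * alpha, 0.5 * alpha)  # safeguard the update
    return alpha

# toy merit function mimicking an over-shooting full Newton step
print(quadratic_line_search(lambda a: (1 - a)**2 + 50 * a**4, dpsi0=-2.0))
```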
In the current work, these algorithms are further developed and applied to the Gambin/Arminjon regularized crystal yield surface, enabling for the first time an implicit return-mapping algorithm that allows stable, effective calculations of the rate-independent limit, with yield-surface exponents as large as one million and strain steps as large as unity. Physical mechanisms modeled by various types of slip systems, such as twinning, phase transformations, latent and reverse hardening etc., have not been included here. At the current stage, the paper presents a proof of concept of a rate-independent framework that enables this, without compromising numerical robustness.

The UMAT developed in this work can be freely downloaded from the following link: https://gitlab.com/ntnu-physmet/crystal-plasticity

Regularized single crystal plasticity model
Two coordinate systems will be considered for expressing the vectors and tensors of the crystal-plasticity model. The global (sample) system has basis vectors e_i, while the co-rotated (crystal) coordinate system has basis vectors ê_i, i = 1, 2, 3, which coincide with the crystal lattice after deformation. The orthogonal transformation tensor from the global to the crystal coordinate system is denoted Q = R^T. The transformation rules follow for a vector v and a second-order tensor T. The initial orientation of the crystal coordinate system is given by the initial transformation matrix Q_0 = R_0^T, which can be calculated for a given set of Euler angles. The imposed velocity gradient L is given in the global system, while the constitutive equations are formulated in the co-rotated crystal system. The hypoelastic approach is employed, with additive decomposition of the rate-of-deformation tensor. Hypoelastic-plastic models are typically used when the elastic strains are small compared to the plastic strains. Except for some cases of complex, elastically dominated closed-loop cyclic loading, the non-conserved energy is negligible and the hypoelastic description is adequate.

In the co-rotated system, D̂ = sym(L̂) is split into its elastic and plastic parts. The rate of the co-rotated Cauchy stress is given by Hooke's law, σ̂̇ = Ĉ : D̂_e, where Ĉ is the fourth-order elastic stiffness tensor, given in the co-rotated system. The unit slip-direction vector b̂_α and the unit slip-plane normal vector n̂_α of each considered slip system α define the Schmid tensor M̂_α in the co-rotated system. The plastic rate-of-deformation tensor, D̂_p, is related to the symmetric part of the Schmid tensor, P̂_α = sym(M̂_α), through the slip rates γ̇_α on the slip systems α.

Following Holmedal [38], the strain-rate-independent regularized crystal yield surface is employed here, with yield function ϕ(σ̂). The plastic rate-of-deformation tensor obeys the normality rule, D̂_p = λ̇ ϕ_σ, where ϕ_σ = ∂ϕ/∂σ̂ is the gradient of the yield function. When the exponent n is large, the parameters ξ_α may be set to unity. It follows from Eqs. (8) and (10) that a relation between the slip rates and the plastic multiplier holds on the yield surface, i.e. for f = 0. Because f is a homogeneous function of first order, it also follows that Eq. (12) relates λ̇ to the plastic work rate Ẇ.

To determine the single-crystal rotation, the total spin tensor Ŵ = skw(L̂) is additively decomposed into its elastic, plastic, and lattice-rotation parts.
Here, Ŵ_p is the plastic spin tensor. The elastic deformations contributing to the elastic spin tensor Ŵ_e are very small and are neglected here. The constitutive lattice spin, Ŵ_c, generates the lattice rotation. The origin of the plastic spin is the contribution to the spin from the shape change caused by the slip activity, expressed through the skew-symmetric part of the Schmid tensor, Ω̂_α = skw(M̂_α), and the slip rates γ̇_α; from this, the constitutive spin can be estimated. The constitutive spin tensor dictates the crystal rotation.

The work hardening of the critical resolved shear stresses is assumed to be given by functions τ_c^α(Γ) that depend on the accumulated slip Γ, defined by a differential equation in the absolute slip rates. In this paper, a simple model for the work hardening [74] is applied for demonstration, where h_α(Γ) is the hardening modulus for each slip system. In this case, it can be integrated as the Voce equation. Note that replacing |·| by ⟨·⟩, where ⟨·⟩ denotes the Macaulay brackets, allows forward and backward slip activities to be distinguished, as explained by Holmedal [38]. This is necessary to model the Bauschinger effect on the slip-system level [75-77].

Rotation update
The update of the rotation tensor R is given by the differential equation (15). An analytical solution exists for the case of constant W and can be written using the Euler-Rodrigues formula (for details see Brannon [78]), Eq. (19), where R_0 = R(0). In general, the spin W changes with time. Eq. (19) can then be applied as an approximation during each time increment ∆t, with W_n kept constant during the increment. Another alternative is the symmetric, second-order numerical update scheme by Hughes and Winget (1980), which is often employed (e.g. for calculating the rotation increment matrix for UMAT in Abaqus/Standard). It assumes that the spin W is known at time t_n + ∆t/2, and the Cayley-Hamilton theorem [79] can be applied to avoid inverting matrices [78]. When using the corotated constitutive spin Ŵ_c, the integration of Eq. (15) is carried out in the corotated frame.

Vector/matrix notation
Symmetric second-order tensors and fourth-order tensors with minor symmetry can be mapped into vector and matrix representations, respectively. The most widely used vector/matrix representations are the Voigt and Mandel notations. The main purpose of a vector/matrix representation is to exploit the tensor symmetries, allowing symmetric stress and strain tensors to be stored as vectors, and fourth-order elastic-modulus or plastic-anisotropy tensors to be stored as matrices. In computational-mechanics implementations, this type of vector/matrix representation significantly reduces the number of operations and the computational cost. In rigid-plastic crystal plasticity, other notations have also been used (see Mánik [69] for a recent overview).

In this paper, the natural vector/matrix notation, originally suggested by Kocks, Tomé and Wenk [80] for use in crystal elasticity, is applied. This notation was recently adapted for use in continuum plasticity in a return-mapping algorithm by Mánik [69]. Due to its explicit representation of the deviator, this notation advantageously separates the deviatoric plasticity from the elasticity.
The plastic part is equivalent to the notation proposed by Lequeu, Gilormini, Montheillet, Bacroix and Jonas [81]. This notation enables a more concise algorithm formulation and, due to the separation of the pressure dependency, it reduces the dimension of the equation system and makes the numerical computation more efficient. Like the Mandel notation, but unlike the Voigt notation, it represents stress and strain tensors identically, see Eq. (23). A brief description of the essentials of the natural notation is given in Appendix A; for an exhaustive description, see Mánik [69].

Return mapping algorithm
At time t^(n), we consider the Cauchy stress σ̂^(n) and the internal variables q̂^(n), expressed in the corotational crystal coordinate system. In the backward Euler integration scheme, the total rate-of-deformation D̂^(n+1) is required at t^(n+1) in the corotational crystal coordinate system. However, the orientation of the crystal coordinate system at t^(n+1) is not known; hence D̂^(n+1) needs to be calculated. What is known in the finite element code is D^(n+1/2), which is assumed constant in the reference system throughout the time step. To limit the complexity of the algorithmic modulus, D̂^(n+1) is extrapolated within the accuracy of the numerical scheme (see Appendix B). The basic problem to be solved by a return-mapping algorithm is to find the Cauchy stress σ̂^(n+1) and the internal variables q̂^(n+1) at time t^(n+1) = t^(n) + ∆t that satisfy the Kuhn-Tucker complementarity conditions.

The fully implicit backward Euler return-mapping algorithm is employed in this paper. In the literature, return-mapping algorithms are generally formulated in tensorial form, while numerical implementations employ either the Voigt or the Mandel vector/matrix notation. In the following, the return-mapping algorithm is expressed directly in the natural notation [69]. For the sake of clarity, the hat (ˆ) designating the corotational frame is omitted in the following for all vectorial and tensorial quantities. Given a total rate-of-deformation vector d⃗^(n+1), a time increment ∆t and the Cauchy stress σ⃗^(n), the trial stress is obtained by applying an elastic predictor (Eq. (25)). Here, in the natural notation, C is a 6 × 6 diagonal matrix representing the elastic moduli with cubic symmetry.

For large trial stresses, the first guess is the key to a numerically stable and robust return-mapping algorithm. The parameter k controls the distance of the initial stress guess from the yield surface; for k = 1, σ⃗^(0) lies on the yield surface. For a given σ⃗^(0), the initial guess for the plastic multiplier, λ̇^(0), is given by Eq. (12).

The return mapping is solved using a Newton-Raphson algorithm with a line search. The solution is sought iteratively as in Eq. (31), where (k) is the iteration counter, ∆σ⃗, ∆λ̇ and ∆Γ are the increments of the Cauchy stress, the plastic multiplier and the accumulated slip, respectively, and α^(k) is a constant determined by the line-search algorithm. By linearizing Eqs. (26), (27) and (28), the increment ∆λ̇ is calculated first; it is then used for calculating the increment of the accumulated slip, and both ∆λ̇ and ∆Γ are finally used for obtaining the increment of the Cauchy stress.
In Eqs. (32), (33) and (34), the matrices L and Y appear, and a set of partial derivatives (involving the resolved shear stresses) must be calculated. For convergence, measures of the three residuals r⃗^(k), f^(k) and q^(k) at iteration (k) are defined; if, for an iteration (k), all three measures fall below their tolerances, convergence is achieved. Recommended error tolerances, used throughout this work, are ε_r = 10^-20, ε_f = 10^-8 and ε_q = 10^-16.

Line search
In each Newton-Raphson iteration, the line search is used to determine the step size α^(k). For this, ψ^(k) serves as the merit function [72]. For the search direction given by the increments ∆σ⃗, ∆λ̇ and ∆Γ, a step length α^(k) must be found such that 0 < α^(k) ≤ 1 and the merit function ψ^(k) is minimized. In this paper, two line-search methods are employed and tested. The first calculates the minimum of a quadratic approximation to the merit function and is adopted here similarly to previous return-mapping algorithms for continuum plasticity [69,71,72]. This is a very efficient method for exponents of the regularized yield function up to ∼100. For large exponents, up to 1 000 000, the quadratic approximation of the merit function becomes too poor, leading to values of α^(k) far from the optimal ones. A more effective line search is therefore developed in this paper, employing a new minimization algorithm that solves crystal-plasticity models with large exponents, up to 1 000 000, more efficiently.

Quadratic line search
For each iteration, the full Newton step is attempted first, i.e. α^(k) = 1 in Eq. (31). This step is accepted only if ψ^(k+1) is lower than some fraction of the merit function ψ^(k) achieved in the previous Newton-Raphson iteration.

Line search by minimization
An alternative approach for finding a step length α^(k) for which ψ^(k)(α^(k)) < ψ^(k)(0) is to find the minimum within some tolerance ε. For this, the standard and general one-dimensional minimization method of Brent [83] can be applied. The way the function is constructed by Eqs. (37) and (38) gives rise to some known properties, e.g. the derivative at α^(k) = 0 reads ψ^(k)′(0) = −2ψ^(k)(0). A new minimization algorithm was tailor-made to utilize the known characteristics of ψ^(k)(α^(k)), making it faster than Brent's method. See Appendix C for a detailed description of this line-search algorithm.

Consistent algorithmic modulus
The consistent algorithmic modulus is essential when the return-mapping algorithm is employed as part of an outer iteration; it must be provided as part of the user subroutine in the implicit FE solver. For the implicit backward Euler return-mapping algorithm of the regularized single-crystal plasticity model described above, it is calculated from a closed-form expression involving a matrix M.

Convergence results
The convergence behavior of the implicit, backward-Euler return-mapping algorithm with line search is examined as a function of the yield-surface exponent n. Note that according to Eq. (25), an arbitrary strain increment occurring during a time increment ∆t is uniquely prescribed by σ⃗_tr − σ⃗^(n). Hence, to effectively cover the space of possible strain-increment directions, an evenly distributed set of 10 000 trial stresses {σ⃗_tr,i}, i = 1, …, 10 000, was generated from a set of 5-dimensional vectors, approximately uniformly distributed on the 5-dimensional hypersphere, generated by the algorithm described in Appendix D.
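Appendix D is not reproduced in this excerpt; a standard alternative that produces approximately uniform directions on a hypersphere is to normalize Gaussian samples, sketched below. The rescaling to f(σ⃗_tr) = s is written here for a first-order homogeneous f, consistent with the homogeneity stated earlier; for a non-homogeneous yield function a scalar root-find per direction would be needed instead.

```python
import numpy as np

def trial_stresses(f, n_samples=10_000, s=2.0, dim=5, seed=0):
    """Deviatoric trial stresses with directions uniform on the (dim-1)-sphere,
    each rescaled so that f(sigma_tr) = s (f assumed homogeneous of degree one)."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((n_samples, dim))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # uniform directions on the sphere
    scale = s / np.array([f(ui) for ui in u])       # f(c*u) = c*f(u) for homogeneous f
    return u * scale[:, None]

# toy stand-in yield function (a norm) replacing the regularized crystal yield function
sig_tr = trial_stresses(lambda x: np.linalg.norm(x, ord=4), s=10.0)
print(sig_tr.shape)   # (10000, 5)
```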
The stress state before the strain increment to be iterated, σ⃗^(n), can without loss of generality be set equal to 0⃗; strain-path changes are thereby also covered by this set. Each trial stress was chosen so that f(σ⃗_tr,i) = s, where s is a chosen constant. The magnitude of s represents the magnitude of the trial stress, which implicitly corresponds to the magnitude of the total strain increment ∆ε⃗ = d⃗∆t, for a given elastic modulus matrix C and a set of critical resolved shear stresses τ_c^α. In this manner, a large set of possible strain paths can be probed, independently of the previous stress solution, and arbitrary strain-path changes are included as well. To test different strain-increment magnitudes, four values, s = 2τ_C, 10τ_C, 100τ_C and 1000τ_C, were included in the set of trial stresses. The largest of these trial stresses corresponds to a strain increment of order unity and in practice provides an ultimate stability challenge for the algorithm. In total, the 40 000 tested strain increments effectively cover the space of realistic strain increments in an implicit FE simulation.

The efficiency and stability of the return-mapping algorithm will in principle depend on the chosen slip systems and their corresponding work hardening. A BCC structure with ⟨111⟩ slip systems and an FCC structure with {111}⟨011⟩ slip systems were tested. Without loss of generality, the Euler angles used were (ϕ_1, Φ, ϕ_2) = (0, 0, 0). The elastic constants and the critical resolved shear stresses applied for the FCC and BCC cases are listed in Table 1. Both a case without work hardening and a case with strong work hardening (linear hardening with h = 1000 MPa) were tested. For each yield-surface exponent, the number of Newton iterations and the number of line-search iterations were counted for the 40 000 probed strain increments.

The algorithm gave similar iteration statistics for all cases tested. Examples of the average number of Newton iterations, and of the average number of line-search iterations required per Newton iteration to reach convergence, are shown in Fig. 1 for two of the tested cases. The largest system, the BCC case with 48 slip systems, requires only slightly more iterations to converge. The results with and without work hardening are very similar. Note that for yield-surface exponents up to about 100, the quadratic line search is always faster, with an equal or lower number of Newton iterations and significantly fewer line-search iterations per Newton iteration. The simpler quadratic line-search algorithm remains competitive up to yield-surface exponents of the order of 1000, which in practice is sufficient for a very good rate-independent approximation by the regularized yield surface. It is a remarkable result that both line-search algorithms converge for all cases tested, up to an exponent of one million, keeping in mind that the largest strain steps included in the test set are of order unity. For these extremely high exponents, the full minimization requires significantly fewer iterations than the quadratic algorithm.

For low exponents, most cases converge within a few iterations. For larger exponents, some of the tested strain steps converge fast, while others require more iterations.
In Fig. 2, examples of iteration statistics for the quadratic line-search algorithm are shown for the cases n = 10, 100 and 10^6. The FCC crystal without hardening and the BCC crystal with 48 slip systems and hardening show very similar distributions. The average number of Newton iterations, as well as the spread of the distribution, increases with increasing exponent. For the cases n = 10 and 100, the peak is at zero line-search iterations, i.e. a full Newton step, while for n = 10^6 several repeated quadratic line-search iterations are required in most cases.

Fig. 3 shows iteration statistics for examples where the full minimization is applied during the line search. Again, the cases n = 10 and 100 are compared for an FCC crystal without hardening and a BCC crystal with 48 slip systems and hardening. Unlike the cases with quadratic line search, the mean value and spread of the number of Newton iterations saturate at large exponents, and the curves are similar for n = 100 and 10^6. However, the number of line-search iterations per Newton iteration increases slowly with increasing n, showing a broad peak at the largest number of iterations, as well as a narrow peak for cases where convergence is accepted after one Newton step.

Application to CPFEM
The return-mapping algorithm was implemented in a user-material subroutine (UMAT) in the FE software Abaqus/Standard and tested for two cases covering simulation of single-crystal and polycrystal behavior of an FCC material with {111}⟨011⟩ slip systems. Elastic constants for aluminum were used, as given in Table 1. The initial critical resolved shear stress was 10 MPa. The hardening law of Eq. (17) was applied, and the exponent n = 100 of the yield function was used, together with the tolerances ε_r, ε_f and ε_q in Eq. (39) for the return-mapping convergence.

A Goss-oriented single crystal
Uniaxial tension of a notched single-crystal specimen with an initial Goss orientation was simulated, i.e. with the crystal cube rotated 45° around the tensile axis. The diameter of the specimen was 10 mm, the diameter inside the notch was 6.4 mm and the notch radius was 3.6 mm. Due to the symmetries of the single crystal's material model, only one-eighth of the specimen was computed. The FE mesh is shown in Fig. 4a. The specimen was meshed with ∼18 000 linear three-dimensional eight-node elements with selective reduced integration (C3D8). A smaller element size was used close to the mid-section of the specimen to ensure an accurate description of the necking. Kinematic boundary conditions were imposed on the nodes located at the end of the specimen by prescribing an axial displacement of 0.8 mm. The average time increment used was ∼0.01 s. On average, 7 iterations were needed for the return-mapping algorithm to converge. The hardening constants used for this case were R_sat^α = 20 MPa and ∆γ_sat^α = 0.15 for all α.

A polycrystalline representative volume element (RVE)
The second case simulated by CPFEM was uniaxial tension of an RVE of a polycrystalline material. The RVE was modeled as a 1 mm³ cube consisting of 30 grains and was generated in the open-source software DREAM.3D, see Groeber and Jackson [86]. It was discretized by 50 × 50 × 50 reduced-integration elements (C3D8R). The deformed FE model of the RVE with its grain morphology is shown in Fig. 6a.
Periodic boundary conditions were applied to the nodes on the exterior boundaries to ensure periodicity [15,87]. The uniaxial tensile mode of the RVE was defined by prescribing boundary conditions at the nodes A, B, C and D (Fig. 5). The nominal strain reached at the end of the simulation was 40%. The von Mises stress in the RVE at the end of the simulation is shown in Fig. 6b. The average time increment used was ∼0.004 s. On average, 10 Newton iterations, with 7 line-search iterations per Newton iteration, were needed for the return-mapping algorithm to converge. The hardening constants used for this case were R_sat^α = 63 MPa and ∆γ_sat^α = 0.1 for all α. The simulation took 24 h.

Comparison of computing times with explicit and implicit FE solvers
The computational efficiency of the implicit CPFEM calculations with the new algorithms presented here is assessed by comparison with an explicit rate-dependent CPFEM implementation, which is efficient due to the use of mass scaling in combination with an adaptive sub-stepping integration scheme based on the modified Euler method [22]. This explicit integration scheme is extremely robust and efficient, allowing an instantaneous strain-rate sensitivity m as low as 10^-5 to be used to explore the very nearly rate-independent stress-strain response. The case chosen for the comparison is a simulation of uniaxial tension up to 1% strain, using an RVE consisting of 60 × 60 × 60 linear elements with full integration. The details of the explicit CPFEM simulation are given in [15]. Note that the explicit CPFEM is a rate-dependent formulation, and the FE solver used for it was LS-DYNA. The purpose here is a coarse comparison of the total computational time. When m ≪ 1, the rate sensitivity m corresponds to the exponent n ≈ 1/m. In both cases, the simulations were run on the same PC using all 8 cores (see Section 7). The timing results are shown in Table 2.
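The hardening constants quoted for the two examples enter the Voce-type law of Eq. (17) in its integrated form. That form is not reproduced in this excerpt; the sketch below uses a common Voce parametrization consistent with saturation constants R_sat and ∆γ_sat, and should be read as an assumption about the exact expression.

```python
import numpy as np

def voce_crss(Gamma, tau0=10.0, R_sat=20.0, dgamma_sat=0.15):
    """Assumed Voce form: tau_c(Gamma) = tau0 + R_sat*(1 - exp(-Gamma/dgamma_sat)).
    tau0 and R_sat in MPa; Gamma is the accumulated slip."""
    return tau0 + R_sat * (1.0 - np.exp(-Gamma / dgamma_sat))

def voce_modulus(Gamma, R_sat=20.0, dgamma_sat=0.15):
    """Hardening modulus h(Gamma) = d tau_c / d Gamma for the assumed form."""
    return (R_sat / dgamma_sat) * np.exp(-Gamma / dgamma_sat)

Gamma = np.linspace(0.0, 1.0, 5)
print(voce_crss(Gamma))     # saturates at tau0 + R_sat = 30 MPa (single-crystal example)
print(voce_modulus(Gamma))  # initial modulus R_sat/dgamma_sat, decaying towards zero
```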
Natural notation
Numerical implementations of return-mapping algorithms involve fourth-order tensors for the elastic stiffness and for the consistent modulus. However, the implementations and numerical schemes must in the end be expressed in terms of matrices and vectors. To make the tensor contractions as simple as possible to handle, the natural vector/matrix notation is applied to represent the tensors involved in the considered crystal-plasticity model. This notation has an orthonormal basis, providing all the convenient properties of the Mandel notation while avoiding the cumbersome distinction between stress-like and strain-like tensors required by the Voigt notation. Moreover, it allows more concise algorithm formulations with higher computational efficiency [69]. In this matrix representation, the elastic stiffness tensor for crystals with cubic symmetry has diagonal form; hence the double contraction in the tensorial version of Hooke's law reduces to simple scalar multiplications. Furthermore, the notation provides an explicit split of the deviatoric and volumetric parts of symmetric second-order tensors, which is advantageous when applied to classical pressure-independent crystal plasticity. Consequently, only the deviatoric part of the constitutive equations enters the plastic corrector of the return-mapping algorithm, which reduces by one the dimension of the system of equations to be solved by the Newton-Raphson method. For the volumetric part, only the elastic predictor, and no plastic corrector, is needed. The natural notation thus enables the same algorithm and numerical set-up to be used for cases with only rigid plasticity (e.g. texture calculations) as for cases requiring full elasto-plastic calculations (e.g. CPFEM).

Line-search algorithms
As for continuum plasticity [71], limited convergence of the pure Newton-Raphson method is observed with the regularized crystal yield surface, even for low exponents. For the relatively large strain increments relevant for implicit FEM, as tested in Section 6, the algorithm without line search diverged for ∼10% of the strain paths already at an exponent n = 5. For a given exponent n, a certain maximum strain increment |∆ε|_max exists that allows convergence for all strain paths. Applying the parameters of the two materials in Table 1, it was found that n ∝ 1/|∆ε|_max, both for the fcc aluminum and for the bcc iron. To obtain convergence for n = 100, |∆ε|_max ≈ 2.5·10^-5 and 10^-4 for fcc aluminum and bcc iron, respectively. When running implicit FEM simulations, the strain increments required for the desired accuracy are considerably larger than that. The overall efficiency of an implicit FEM implementation relies on the strain increments being sufficiently large, controlled by the global accuracy rather than by the stability of the local iterations in the user subroutine, since most of the computing time is spent between the user-subroutine calls.

To ensure stable convergence of the return-mapping algorithm presented here, the line-search algorithm plays a crucial role. Its purpose is to find a scaling α of the increment ∆x suggested by the Newton algorithm, i.e. x^(n+1) = x^(n) + α∆x, ensuring that the scalar merit function ψ is always reduced compared to the previous step, i.e. ψ(x^(n) + α∆x) < ψ(x^(n)).
In continuum plasticity, line search has been used in several works, see e.g. [69-73]. In this work, two different line-search algorithms were employed and tested: the quadratic line search and the line search by a minimization algorithm. The quadratic one returns the α that minimizes a quadratic polynomial interpolating ψ(α), matching ψ(0), ψ(1) and ψ′(0) exactly. As this approximation becomes less and less accurate for large values of the exponent n, the number of Newton iterations increases (see Figs. 1 and 2).

The line search by minimization returns, within the prescribed numerical accuracy, the value of α that minimizes ψ. This new minimization algorithm is a tailor-made version of Brent's algorithm (see Appendix C). The relative and absolute tolerances, ϵ and ϵ_a, control the precision with which the minimum of ψ is found. In general, tight tolerances require a larger number of line-search iterations but lead to a lower number of Newton iterations; how strongly depends on the exponent n. For large n, i.e. n > 10 000, the number of Newton iterations is greatly reduced when small tolerances are prescribed. For n < 100, the number of Newton iterations is less sensitive to how precisely the minimum of ψ is estimated, and coarser tolerances are more beneficial, reducing the number of line-search iterations to be performed. The relationships ϵ = min(0.3, 1/n) and ϵ_a = 10^-2 ϵ are found to work optimally. With these, the number of Newton iterations remains almost constant for all n > 100, while the number of line-search iterations still increases gradually (see Fig. 1 and Fig. 2).

A Newton iteration involves computing the Jacobian and solving a 6 × 6 linear system and is therefore about four times as computationally expensive as a line-search iteration. Hence, the monotonically increasing number of Newton iterations as a function of n makes the quadratic algorithm less competitive. The overall timing reveals that the quadratic line search converges faster than the line search by minimization for n up to about 500.

Approaching the limit of a rate-independent solution
Solutions of rate-independent crystal plasticity obeying Schmid's law suffer from non-uniqueness, i.e. the Taylor ambiguity. Several attempts have been made to obtain a unique solution [37]. As discussed by Holmedal [38], an equivalence exists between the unique rate-sensitive solution, using the viscoplastic law (Eq. (2)) with strain-rate sensitivity m, and the unique solution provided by the regularized yield surface defined by Eq. (9) with exponent n. The proposed return-mapping algorithm enables, for the first time, calculations of extreme solutions of a yield surface with exponents n up to a million, corresponding to a strain-rate sensitivity m = 10^-6. Slip solutions for 10 000 different crystals with a uniform distribution of their orientations (the same as in Section 6) were calculated for each exponent n. For each crystal and each exponent, the relative error of the solution compared to a limit solution γ_α^lim can be quantified. The limit γ_α^lim was calculated using n = 1 000 000, which in practice is equivalent to the ambiguity limit within the numerical precision. This was done for an FCC structure with {111}⟨011⟩ slip systems (Fig. 7a) and for a BCC structure with {110}⟨111⟩ slip systems (Fig. 7b), to which {121}⟨111⟩ slip systems (Fig. 7c) and {132}⟨111⟩ slip systems (Fig. 7d) were added. The color map in Fig. 7 shows the distribution of the relative errors over all crystals for each exponent n.
Fig. 7 demonstrates the existence of a smallest exponent for which the solutions are still practically equal to the rate-independent Taylor ambiguity limit, i.e. for which the same set of slip systems is activated. This is in line with previous findings [37], which showed for an FCC polycrystal that the texture change is not sensitive to the strain-rate sensitivity m up to values less than ∼0.1 (correspondingly, n larger than ∼10).

The results for BCC crystals (Fig. 7b, c and d) show that the smallest exponent giving this limit solution increases with the number of slip systems. For crystals with 48 slip systems it may be as high as ∼400. This may influence texture calculations. As an example, solutions for n = 50 and 400 were calculated by the Taylor model for a rolling reduction of 90%. The ϕ_2 = 45° section of the ODF is shown in Fig. 7e. As expected from Fig. 7d, the texture for the limit case, i.e. n = 1 000 000, is identical to the case n = 400. However, the texture intensity distribution in this section with n = 50 is significantly different. According to Larour, Baumer, Dahmen and Bleck [88], the strain-rate sensitivities of various steel grades at room temperature may vary from m = 0.001 to 0.02, corresponding to n between 50 and 1000. If a rate-independent simulation of steel is desired, the exponent n should be chosen larger than ∼400 (m less than ∼0.0025) when accounting for the 48 slip systems. In many cases, however, it is better to account for a realistic strain-rate sensitivity by choosing an appropriate exponent n in the range below 400 (m above 0.0025). Note that the scaling technique used to speed up viscoplastic calculations [36,89,90] would, for bcc, miss the correct corner solutions in these cases. This illustrates the need for a line search in the more general cases, to obtain an exact solution.
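Working with exponents up to n = 10^6, as above, requires an overflow-safe evaluation of the regularized yield function; how this is handled in the released UMAT is not shown in this excerpt. A standard trick is to factor out the largest term of the n-norm, sketched below, assuming the form implied by Eq. (9): a weighted sum of |τ_α/τ_c^α|^n raised to the power 1/n, here with the weights ξ_α set to unity.

```python
import numpy as np

def phi_regularized(tau, tau_c, n):
    """Overflow-safe n-norm evaluation of the regularized yield function.
    Factoring out r_max keeps every power argument <= 1, so even n = 1e6 is safe."""
    r = np.abs(tau) / tau_c
    r_max = r.max()
    if r_max == 0.0:
        return 0.0
    s = np.sum((r / r_max) ** n)          # each term <= 1: no overflow, graceful underflow
    return r_max * np.exp(np.log(s) / n)  # phi = r_max * s^(1/n), via logs for stability

tau = np.array([9.0, 7.5, 10.0, 2.0])     # example resolved shear stresses, MPa
for n in (10, 1_000, 1_000_000):
    print(n, phi_regularized(tau, tau_c=10.0, n=n))
# As n grows, phi tends to max|tau_a|/tau_c: the sharp-cornered Schmid inner envelope.
```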
Comparison of computing times with explicit and implicit FE solvers
The model implemented in CPFEM, as part of either an implicit or an explicit solver handling the stress balance and compatibility between the finite elements, is solved incrementally. In the implicit approach, iterations are made for each increment to find a solution of the set of non-linear finite element equations. In the explicit approach, the dynamic inertial forces applied to the finite elements allow explicit time discretization without iterations at each time increment. Regardless of the choice of an explicit or implicit FE framework, the crystal-plasticity equations are solved for each integration point, one time step ahead, as prescribed inside the user subroutine (UMAT). There, the equations may be solved without iterations in an explicit form, in a fully implicit form, or by some type of semi-implicit scheme in which only some of the variables are solved implicitly. The implicit solution is considerably more expensive and amounts to the major part of the computational time with explicit FE solvers, whereas with implicit FE solvers the major part of the calculations is related to iterations on the balance between the elements. To compete, the implicit solvers must take significantly larger time steps. For rate-dependent CPFEM, a thorough comparison of the merits of explicit and implicit CPFEM solvers was reported by Harewood and McHugh [24]. They concluded that when the material deformation is the main part of the simulation, the implicit solver is several times faster, whereas in problems with complex contact and sliding conditions the implicit solver becomes significantly slower. Furthermore, they concluded that their implicit solver struggled to converge and required decreasing time steps for lower strain-rate sensitivities. The latter would not be an issue with line-search algorithms similar to those reported here.

The limited conditional stability of the explicit time integration forces the explicit FE solver to use very short time increments, leading to a large number of increments to be calculated. For quasi-static problems, careful, proper mass scaling can help increase the time step. Then the instability related to the local integration of the equations for each integration point in the UMAT subroutine becomes the bottleneck for the time step. Stability can efficiently be gained by sub-stepping in the UMAT, as reported by Zhang, Hopperstad, Holmedal and Dumoulin [22]. However, with decreasingly small strain-rate sensitivity, increasingly many sub-steps are required. Hence, below some small strain-rate sensitivity it becomes beneficial to replace the sub-stepping approach by an implicit scheme with line search, like the one suggested here for the rate-insensitive case. Further investigations are required but are beyond the scope of this work.

When using an implicit solver, the extra computing time required for the return-mapping algorithm to solve the constitutive equations of the material model is small compared to the time spent by the FE solver on the global finite element equations. The stability of the implicit scheme suggested here allows almost arbitrarily large time steps and makes this the fastest alternative for, e.g.,
the calculation of the RVE. The time increment is usually controlled by an automatic incrementation routine in the FE software and is limited by the desired accuracy rather than by stability requirements. In some cases, however, contact or sliding conditions might significantly limit the allowed time steps.

In non-linear crystal-plasticity analysis, Abaqus/Standard uses a variant of Newton's iterative solution method. For each iteration, it is necessary to solve a set of linear equations, which for 3D problems corresponds to a matrix with dimensions proportional to the number of nodes squared. The direct matrix solver in Abaqus/Standard uses a sparse Gauss elimination method for each solution of these linear matrix problems. This is the most time-consuming part of the implicit analysis, especially for large models. Unlike for the explicit solver, the storage of this matrix during each iteration requires a lot of computer memory, which is the limiting factor for large models on current computers. However, the available computer memory has increased rapidly during the last decades, providing increasing amounts of internal RAM and fast serial buffer memory on solid-state disks, even in regular PCs.

Conclusion
A numerically stable and efficient, fully implicit return-mapping algorithm for rate-independent crystal plasticity was obtained by including a line-search algorithm as part of the Newton iterations and by utilizing an improved first guess for the iterations. Fast convergence of the algorithm was demonstrated for any realistic strain step and for very high exponents of the regularized yield surface. Full stability was maintained for an exponent of one million, allowing the Schmid limit to be approached.

It was found that with 12 slip systems, in either BCC or FCC, the set of active slip systems corresponding to the ambiguity solution is obtained whenever the yield-surface exponent is larger than ∼10. However, for BCC with 48 slip systems, an exponent larger than 1000 will be required. The choice of the exponent is equivalent to prescribing an instant strain-rate sensitivity. Choosing the correct exponent for the simulations, or alternatively running a rate-dependent implementation with the correct strain-rate sensitivity, can therefore be an important issue in BCC texture simulations with 48 slip systems.

A co-rotational hypo-elastic-plastic implementation was made in the user-material subroutine of Abaqus/Standard (made available as open source). Efficient computations were demonstrated for uniaxial tension of a polycrystalline representative volume element deformed to large strains. Based on the timing of crystal-plasticity finite element simulations, it is concluded that such simulations are significantly faster with the new algorithm in an implicit FE solver than with a state-of-the-art explicit formulation in an explicit FE solver.

Declaration of competing interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Bjorn Holmedal reports financial support was provided by the Research Council of Norway.

Appendix A.
Natural vector/matrix notation
The six independent components of a symmetric second-order stress or strain tensor can be represented by a 6 × 1 vector. Correspondingly, a fourth-order symmetric tensor can be expressed as a 6 × 6 matrix. The Voigt stress vector contains a one-to-one list of the stress components, σ⃗_V = (σ_11, σ_22, σ_33, σ_23, σ_13, σ_12)^T. However, in the corresponding Voigt strain vector, ε⃗_V = (ε_11, ε_22, ε_33, 2ε_23, 2ε_13, 2ε_12)^T, the shear strains must be multiplied by a factor of two to preserve the inner product of stress and strain. Since the Voigt notation is frequently used in the literature, the natural notation, as defined by Eq. (23), is related to the Voigt notation here. The natural notation has an orthonormal basis, which makes the notation the same for both strain and stress tensors, like the Mandel notation but unlike the Voigt notation. Furthermore, the natural notation has the advantage that it decouples the hydrostatic pressure, and it diagonalizes the stiffness matrix for cubic crystals, as shown below.

The transformation matrices T_σ and T_ε transform the stress vector σ⃗_V and the strain vector ε⃗_V from the Voigt notation to the vectors σ⃗ and ε⃗, the representations of stress and strain in the natural notation. These transformations are related by T_σ = T_ε^-T.

Fourth-order elastic stiffness tensors are linear mappings of a second-order strain tensor to a second-order stress tensor, which in the contracted notations become 6 × 6 elastic stiffness matrices relating the strain vector to the stress vector, and they transform from the Voigt notation into the natural notation accordingly. The elastic stiffness matrices for both isotropic and cubic symmetry are given in the Voigt notation in terms of the bulk modulus K = λ + (2/3)µ. This is convenient for numerical computation, e.g. for computing matrix inversions and for matrix storage.

Assume an orthonormal transformation tensor R in the Cartesian orthonormal basis, R = R_ij e_i ⊗ e_j; the coefficients R_ij can be arranged in a 3 × 3 matrix R. Second- and fourth-order tensors A and A then transform in the usual way. For expressing second-order tensors A and Â and fourth-order tensors A and Â in the natural notation as a⃗ and â⃗ (6 × 1 vectors) and A and Â (6 × 6 matrices), respectively, there exists a 6 × 6 matrix R, with transpose R^T, carrying out the corresponding transformations in the natural notation.

For the implementation, Abaqus/Standard provides to the UMAT the rotation increment matrix ∆R (DROT), stored as a 3 × 3 matrix, and the deformation gradient tensor with respect to the initial configuration, computed both at the beginning and at the end of the increment, F_n (DFGRD0) and F_{n+1} (DFGRD1), respectively, each stored as a 3 × 3 matrix. At the end of the increment, i.e. at t_{n+1}, the Cauchy stress vector σ⃗_V^{n+1} (STRESS) needs to be updated, and the Jacobian matrix of the constitutive model, i.e. the algorithmic modulus (∂∆σ/∂∆ε)_{n+1} (DDSDDE), needs to be computed as a 6 × 6 matrix in Voigt notation.
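As a concrete illustration of the orthonormal mapping described in this appendix, the sketch below maps a symmetric tensor to a six-vector whose first five components are deviatoric and whose sixth carries the hydrostatic part. This is one admissible orthonormal basis with the stated properties; the component ordering and signs of the T_σ and T_ε matrices in Mánik [69] may differ.

```python
import numpy as np

SQRT2, SQRT3, SQRT6 = np.sqrt(2.0), np.sqrt(3.0), np.sqrt(6.0)

def to_natural(T):
    """Map a symmetric 3x3 tensor to an orthonormal 6-vector: components 0-4 are
    deviatoric, component 5 is hydrostatic, and the Euclidean inner product of two
    such vectors equals the double contraction T1 : T2."""
    return np.array([
        (T[0, 0] - T[1, 1]) / SQRT2,
        (2*T[2, 2] - T[0, 0] - T[1, 1]) / SQRT6,
        SQRT2 * T[1, 2],
        SQRT2 * T[0, 2],
        SQRT2 * T[0, 1],
        (T[0, 0] + T[1, 1] + T[2, 2]) / SQRT3,   # pressure channel: elastic predictor only
    ])

sig = np.array([[100., 30., 0.], [30., 50., 0.], [0., 0., -20.]])
eps = np.array([[1e-3, 2e-4, 0.], [2e-4, 5e-4, 0.], [0., 0., -1e-3]])
v_s, v_e = to_natural(sig), to_natural(eps)
print(np.isclose(v_s @ v_e, np.tensordot(sig, eps)))   # inner product preserved: True
print(np.isclose(v_s[5], np.trace(sig) / SQRT3))       # volumetric part isolated: True
```

Because both stress and strain use the same orthonormal basis, no factor-of-two bookkeeping is needed, and dropping the sixth component directly yields the five-dimensional deviatoric system solved in the plastic corrector.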
When a user-defined material model is used for continuum elements, Abaqus/Standard employs the Jaumann stress rate (note that the Green-Naghdi stress rate is employed for structural elements in Abaqus/Standard and for all element types in VUMAT in Abaqus/Explicit). A local orientation is not applied here (the *ORIENTATION keyword is not used); hence the components of all tensors are given in the reference (global) coordinate system. Note that, with use of a local orientation, Abaqus/Standard provides the components of all tensors in the local corotated coordinate system at time t_{n+1}, rotated from t_n to t_{n+1} by ∆R_{n+1/2}. To account for the rigid-body rotations, Abaqus/Standard applies the rotation ∆R_{n+1/2} to the Cauchy stress before it is passed to the UMAT as σ_n for the next time increment. However, the crystal constitutive equations and the stress update are calculated in the corotated crystal-lattice coordinate system. Hence, the Jaumann rotation of the Cauchy stress by ∆R_{n+1/2} performed by Abaqus must be undone, followed by a rotation R_n into the crystal-lattice system at time t_n.

Fig. 1. Average number of Newton iterations for FCC in (a) and BCC in (c); average number of line-search iterations per Newton iteration for FCC in (b) and BCC in (d). Cases with and without hardening are compared for iterations by quadratic or full minimization during the line search.
Fig. 2. Line search with quadratic approximation. Distributions of Newton and line-search iterations for the test case of an FCC crystal without hardening (a), (b) and a BCC crystal with hardening (c), (d).
Fig. 3. Line search with the new minimization algorithm. Distributions of Newton and line-search iterations for the test case of an FCC crystal without hardening (a), (b) and a BCC crystal with hardening (c), (d).
Fig. 4. Uniaxial tension of a notched single crystal in a Goss orientation; half of the sample is shown.
Table 1. Constitutive parameters used in the convergence study: λ is Lamé's first parameter, µ the shear modulus, and C_11, C_12 and C_44 are the cubic elastic constants; in the natural notation, both the isotropic and the cubic stiffness transform into diagonal matrices.
Table 2. Comparison of computing times with explicit and implicit CPFEM solvers for near-faceted (Schmid) crystal plasticity.
Weyl nodes as topological defects of the Wannier-Stark ladder: From surface to bulk Fermi arcs

A hallmark of a Weyl semimetal is the existence of surface Fermi arcs connecting two surface-projected Weyl nodes with opposite chiralities. An intriguing question is what determines the connectivity of surface Fermi arcs when multiple pairs of Weyl nodes are present. To answer this question, we first show that the locations of surface Fermi arcs are predominantly determined by the condition that the Zak phase, integrated along the normal direction to the surface, is π. More importantly, the Zak phase can reveal the peculiar topological structure of a Weyl semimetal directly in the bulk. Here, we show that the non-trivial winding of the Zak phase around each projected Weyl node manifests itself as a topological defect of the Wannier-Stark ladder, the set of energy eigenstates emerging under an electric field. Remarkably, this structure leads to "bulk Fermi arcs," i.e., open line segments in the bulk momentum spectra. It is argued that bulk Fermi arcs should exist in conjunction with their surface counterparts to conserve the Weyl fermion number under an electric field, which is supported by explicit numerical evidence.

An intriguing question is what determines the connectivity of surface Fermi arcs when multiple pairs of Weyl nodes are present. In this work, we answer this question by showing that the locations of surface Fermi arcs are predominantly determined by the condition that the Zak phase integrated along the normal direction to the surface is π. The Zak phase is the Berry phase integrated along a straight but closed path in momentum space traversing the entire one-dimensional Brillouin zone [22]. More importantly, the Zak phase can reveal the peculiar topological structure of a Weyl semimetal directly in the bulk. It has been shown in a previous work [23] that the non-trivial topological order of a topological insulator is directly manifested in the winding number of the Wannier-Stark ladder (WSL), which is in turn governed by the topological structure of the Zak phase. Here, we show that, in a Weyl semimetal, the Zak phase winds by 2π around each projected Weyl node, creating a screw dislocation in the energy spectrum of the WSL eigenstates. These topological defects eventually induce open line segments in the momentum spectrum of WSL eigenstates, which we call "bulk Fermi arcs." We argue that the existence of bulk Fermi arcs is actually required to conserve the Weyl fermion number under an electric field, which is supported by explicit numerical evidence.

Results
Connectivity of surface Fermi arcs. We ask whether a certain bulk property of the system can determine the connectivity of surface Fermi arcs. An answer to this question would provide valuable information for characterizing a Weyl semimetal without solving the complicated eigenvalue equations of the microscopic Hamiltonian with open boundary conditions. To this end, let us begin by considering graphene, which is a two-dimensional Dirac/Weyl semimetal. In graphene, edge states appear depending on the edge orientation. Delplace et al. [24] proposed that the existence of edge states is related to the condition that the Zak phase integrated along the normal direction to the edge is π. This can be proved rigorously for certain edge orientations, and is numerically confirmed in general.
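The π-Zak-phase criterion can be checked numerically with a discretized Wilson loop. The sketch below computes the Zak phase of the lower band of a generic two-band Bloch Hamiltonian h(k)·σ; the SSH-like h(k) used here is only a stand-in to exercise the formula, not a model from this paper.

```python
import numpy as np

def zak_phase(h_of_k, nk=400):
    """Zak phase of the lower band via a discretized Wilson loop over k in [0, 2pi)."""
    ks = np.linspace(0.0, 2*np.pi, nk, endpoint=False)
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], complex)
    vecs = []
    for k in ks:
        hx, hy, hz = h_of_k(k)
        _, v = np.linalg.eigh(hx*sx + hy*sy + hz*sz)
        vecs.append(v[:, 0])                             # lower-band eigenvector
    w = 1.0 + 0.0j
    for i in range(nk):
        w *= np.vdot(vecs[i], vecs[(i + 1) % nk])        # overlap links close the loop
    return -np.angle(w)                                  # gauge-invariant phase mod 2pi

# SSH-like stand-in h(k) = (t1 + t2*cos k, t2*sin k, 0): Zak phase pi when t2 > t1
print(zak_phase(lambda k: (0.5 + np.cos(k), np.sin(k), 0.0)))   # ~ +/- pi
print(zak_phase(lambda k: (1.5 + np.cos(k), np.sin(k), 0.0)))   # ~ 0
```

The product of nearest-neighbor overlaps is gauge invariant because every eigenvector appears once as a bra and once as a ket, so no fixed gauge choice is needed.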
Meanwhile, Mong and Shivamoggi [25] provided a related but somewhat more general proof of the existence condition for edge/surface states in two/three-dimensional topological insulators. Specifically, they considered the Dirac Hamiltonian, written in terms of k_⊥ and k_∥ (k_∥ = |k_∥|), the momenta perpendicular and parallel to the normal direction of the edge/surface, respectively. Γ is a vector composed of gamma matrices satisfying the Clifford algebra, and a is the lattice constant along k_∥. An important assumption is that hopping occurs only between nearest neighbors along the normal direction to the edge/surface. Under this assumption, the curve traced by h as a function of k_∥ forms an ellipse, whose semi-major and semi-minor axes are 2Re[b(k_⊥)] and 2Im[b(k_⊥)], with its center located at b_0(k_⊥). It is proved in Ref. [25] that an edge/surface state exists at k_⊥ if and only if the projection of the h curve onto the Re[b(k_⊥)]-Im[b(k_⊥)] plane encloses the origin h = 0. Moreover, the energy of such an edge/surface state equals the distance between the origin and the plane containing the h curve. This means that zero-energy edge/surface states occur when the origin lies within the same plane that contains the h curve.

For two-band models, this existence condition for zero-energy edge/surface states can be nicely rephrased in terms of the Zak phase. In two-band models, where Γ is replaced by σ, there is a Dirac monopole with monopole strength q = ±1/2 at the origin h = 0, generating a radial Berry curvature. The above existence condition is then precisely equivalent to the condition that the Berry phase integrated along the h curve is π, which is half the solid angle subtended by an equator. In turn, this particular Berry phase is nothing but the Zak phase integrated along the normal direction to the edge/surface, i.e., γ_Zak^α(k_⊥) = ∮ dk_∥ ⟨u_α(k)|i∂_{k_∥}|u_α(k)⟩, where |u_α(k)⟩ is the periodic part of the Bloch wave function in the α-th band, which can be either the valence or the conduction band in two-band models.

Strictly, the applicability of the above existence condition is limited to two-band models with nearest-neighbor hopping. However, this limitation can be somewhat relaxed, considering that, by its intrinsic nature, the microscopic Hamiltonian of every Weyl semimetal can be accurately approximated by a two-band, low-energy effective Hamiltonian, obtained by expanding the microscopic Hamiltonian up to second order in the momenta near the Weyl nodes. By performing the substitutions k → (1/a) sin ka and k² → (2/a²)(1 − cos ka), one can then construct a minimally lattice-regularized two-band Hamiltonian with nearest-neighbor hopping. Provided that the connectivity of surface Fermi arcs is well captured by such a minimally lattice-regularized Hamiltonian, we predict that the locations of surface Fermi arcs (which are the zero-energy surface states) are predominantly determined by the condition that the Zak phase integrated along the normal direction to the surface is π. This prediction is confirmed to be accurate in various theoretical models.

Topological defects of the Wannier-Stark ladder. The Zak phase can reveal the peculiar topological structure of a Weyl semimetal directly in the bulk through the WSL emerging under an electric field.
Under the adiabatic condition that the electric field is not strong enough to cause mixing between different bands, i.e., when there is no Zener tunneling, the energy of the WSL eigenstates is given by Eq. (2) [23]: E_WSL^{α,n}(k_⊥) = Ē_α(k_⊥) + eaE[n + γ_Zak^α(k_⊥)/2π], where Ē_α(k_⊥) = (a/2π)∮dk_∥ E_α(k) is the one-dimensionally averaged energy of the α-th band, E is the electric-field strength, and n (∈ Z) is the WSL index. γ_Zak^α(k_⊥) is the same Zak phase as above, except that here k_⊥ and k_∥ are the momenta perpendicular and parallel to the electric field, respectively.

To demonstrate concretely how the Zak phase reveals the peculiar topological structure of a Weyl semimetal, let us consider the model Hamiltonian proposed by Yang et al. [2], Eq. (3), which describes a time-reversal symmetry-broken Weyl semimetal with two Weyl nodes at k = (±k_0, 0, 0). From here on, all momenta are given in units of 1/a unless stated otherwise. Figure 1(a) shows the zero-energy momentum spectrum of surface states residing on a y-axis-cut surface, where k_x and k_z are good quantum numbers. As one can see, there exists a surface Fermi arc connecting the two surface-projected Weyl nodes at (k_x, k_z) = (±k_0, 0). It was predicted in the previous section that the locations of surface Fermi arcs are predominantly determined by the condition that the Zak phase integrated along the normal direction to the surface, here the y-direction, is π. Figure 1(b) shows that this prediction is accurate.

More importantly, each projected Weyl node creates a screw dislocation in the Zak phase; see Fig. 1(c) for a 3D plot of the Zak phase. Such a screw dislocation in the Zak phase manifests itself as a topological defect of the WSL. Figure 1(d) shows the zero-energy momentum spectrum of WSL eigenstates generated from the valence band, obtained via the adiabatic formula in Eq. (2). Specifically, in Fig. 1(d) we plot a spectral function at ω = 0, Eq. (4), which exhibits spectral peaks following the trajectories of ω = E_WSL^{α,n}(k_⊥). [In Fig. 1, computed via Eqs. (2) and (4), the electric field is applied along the y-direction with strength eaE/t = 0.25; note that the momentum spectrum of WSL eigenstates is periodic in energy with period eaE.] Note that E_WSL^{α,n}(k_⊥) becomes multi-valued if there is a screw dislocation in the Zak phase. Consequently, the zero-energy momentum spectrum of WSL eigenstates can show, in addition to many closed loops, an open line segment connecting two projected Weyl nodes with opposite chiralities, similar to the surface Fermi arc. We call this open line segment the bulk Fermi arc. In fact, the conduction band generates a similar bulk Fermi arc (as well as other closed-loop WSL eigenstates), which, incidentally, exactly overlaps with the valence-band counterpart at zero energy in the above model Hamiltonian.

Fortunately, it turns out that the bulk Fermi arc remains robust despite mixing between the WSL eigenstates generated from the valence and conduction bands. In other words, the bulk Fermi arc can persist even beyond the strictly valid regime of the adiabatic condition, i.e., eaE/t ≪ 1. To confirm this, we compute the momentum spectrum of WSL eigenstates by directly diagonalizing the microscopic model Hamiltonian under an electric field.
To confirm this, we compute the momentum spectrum of WSL eigenstates by directly diagonalizing the microscopic model Hamiltonian under an electric field. Specifically, we compute the following spectral function, which is constructed in terms of the exact eigenstates of the microscopic Hamiltonian in the presence of the electrostatic potential:

A(k_⊥, ω) = −(1/π) Im Tr {[ω + iη − H̃(k_⊥) − V]^{−1}},   (5)

where H̃(k_⊥) is obtained by Fourier-transforming the microscopic model Hamiltonian with respect to k_y: [H̃(k_⊥)]_{n_y,n′_y} = (1/2π) ∫ dk_y H(k) e^{ik_y a(n_y − n′_y)}, where n_y is the layer index along the y-direction. Note that the trace Tr is taken over both n_y and the pseudospin index. The electrostatic potential term is given by [V]_{n_y,n′_y} = eaE(n_y − N_y/2) δ_{n_y,n′_y}, where the electrostatic potential is set to be zero at the middle of the system. (FIG. 2: evolution of the zero-energy momentum spectrum of WSL eigenstates as a function of electric-field strength. Specifically, the electric-field strengths are set equal to eaE/t = 0.25, 0.5, 0.75, 1 in panels (a)-(d), respectively. Here, WSL eigenstates are obtained by directly diagonalizing the microscopic model Hamiltonian under electric fields.) Figure 2 shows various zero-energy cuts of the above spectral function as a function of electric-field strength. In particular, Fig. 2(a) is computed at the same electric-field strength as Fig. 1(d). As one can see, the two figures are essentially identical, showing that the adiabatic formula provides an excellent approximation to the exact results, at least in this range of electric-field strengths. The subsequent panels of Fig. 2 show that the bulk Fermi arc persists up to reasonably strong electric fields.
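A sketch of the corresponding exact calculation follows: the model is Fourier-transformed to a finite stack of layers along y, the linear electrostatic potential of Eq. (5) is added, and the trace of the resolvent gives the spectral function. It again uses the hypothetical h_vec model and Pauli matrices from the first sketch; layer number, broadening, and field strength are illustrative parameters, not values taken from the paper.

```python
import numpy as np

# Sketch: exact WSL spectral function for a finite stack of Ny layers
# along y with a linear electrostatic potential, in the spirit of Eq. (5).
# Assumes h_vec(), sx, sy, sz from the first sketch are in scope.

def layered_hamiltonian(kx, kz, Ny=60, eaE=0.25, nk=400):
    # [H]_{ny,ny'} = (1/2 pi) Int dky H(k) exp(i ky (ny - ny')),  a = 1.
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    H = np.zeros((2 * Ny, 2 * Ny), dtype=complex)
    for dny in range(-Ny + 1, Ny):
        blk = np.zeros((2, 2), dtype=complex)
        for ky in ks:
            hx, hy, hz = h_vec(kx, ky, kz)
            blk += (hx * sx + hy * sy + hz * sz) * np.exp(1j * ky * dny) / nk
        if np.abs(blk).max() < 1e-12:        # only short-range hoppings survive
            continue
        for ny in range(Ny):
            nyp = ny - dny
            if 0 <= nyp < Ny:
                H[2*ny:2*ny+2, 2*nyp:2*nyp+2] = blk
    for ny in range(Ny):                     # potential, zero at mid-stack
        H[2*ny:2*ny+2, 2*ny:2*ny+2] += eaE * (ny - Ny / 2) * np.eye(2)
    return H

def spectral_function(kx, kz, omega=0.0, eta=0.02, **kw):
    # A(k_perp, w) = -(1/pi) Im Tr [w + i eta - H]^(-1)
    H = layered_hamiltonian(kx, kz, **kw)
    G = np.linalg.inv((omega + 1j * eta) * np.eye(H.shape[0]) - H)
    return -np.imag(np.trace(G)) / np.pi
```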
The model Hamiltonian in Eq. (3) provides a convenient platform to study various topological properties of Weyl semimetal. The applicability of this model, however, is somewhat limited since it requires a breaking of the time-reversal symmetry. Another pathway to generate Weyl semimetal is to break the inversion symmetry while preserving the time-reversal symmetry, which may be more relevant in view of recent experimental confirmations of Weyl semimetal in TaAs [6-12]. In this work, we focus on the tight-binding model Hamiltonian proposed by Ojanen [5], which describes a time-reversal invariant Weyl semimetal:

H(k) = d_1(k) σ_x + d_2(k) σ_y + σ_z Σ_α D_α(k) s_α,

where d_1(k) = t(1 + cos k·a_1 + cos k·a_2 + cos k·a_3), d_2(k) = t(sin k·a_1 + sin k·a_2 + sin k·a_3), and D_x(k) = λ[sin k·a_2 − sin k·a_3 − sin(k·a_2 − k·a_1) + sin(k·a_3 − k·a_1)], with a_1 = (a/2)(0, 1, 1), a_2 = (a/2)(1, 0, 1), and a_3 = (a/2)(1, 1, 0). (Here, we reintroduce the lattice constant a for clarity.) (σ_x, σ_y, σ_z) and (s_x, s_y, s_z) are the Pauli matrices acting on the sublattice and spin bases, respectively. The other components, D_y(k) and D_z(k), are obtained by permuting a_i (i = 1, 2, 3) cyclically in the expression for D_x(k). The above Hamiltonian has four bands composed of two conduction and two valence bands, among which the middle two bands, i.e., the top valence and bottom conduction bands, constitute a Weyl semimetal. Concretely, the Hamiltonian can be conveniently decomposed into two block-diagonalized sublattice-basis Hamiltonians, H^±_σ, by first diagonalizing the spin-basis part of the Hamiltonian, H_s = Σ_α D_α s_α:

H^±_σ(k) = d_1(k) σ_x + d_2(k) σ_y ± |D(k)| σ_z,

where |D(k)| = [Σ_α D_α(k)²]^{1/2}. We check whether such a time-reversal invariant Weyl semimetal can also be characterized by the existence of bulk Fermi arcs. To this end, it is important to realize that the band structure of time-reversal invariant Weyl semimetal is generally more complicated than that of the time-reversal symmetry-broken counterpart due to various band crossings. In the above Hamiltonian, it turns out that there are crossings between the top and bottom valence/conduction bands. In this situation, the WSL eigenenergy cannot simply be given by the adiabatic formula in Eq. (2), but is rather obtained as an eigenvalue solution of the so-called "requantized" non-Abelian semiclassical Hamiltonian (NASH) [23]:

H_NASH(k) = Ê(k) + eE·[i∇_k + Â(k)],

where [Ê(k)]_{αβ} = δ_{αβ} E_α(k) is the energy dispersion and [Â(k)]_{αβ} = ⟨φ_α| i∇_k |φ_β⟩ is the Berry connection with a non-Abelian structure [26-31]. Here, α and β denote the indices of all bands that cross each other. Note that the NASH eigenvalue equation can be solved exactly by the adiabatic formula if all off-diagonal elements of the non-Abelian Berry connection are set equal to zero [23]. See Methods for details on how to diagonalize the NASH efficiently to obtain the spectral function of WSL eigenstates. Figure 3(a) shows the zero-energy momentum spectrum of y-axis-cut surface states, which exhibits multiple surface Fermi arcs. Considering that the block-diagonalized Hamiltonian of the middle two bands can be regarded as essentially a lattice-regularized Hamiltonian of Weyl semimetal containing all Weyl nodes, it is natural to predict that the connectivity of the above surface Fermi arcs is determined by the π Zak-phase condition, where the Zak phase is obtained by integrating the Berry connection of the top valence (or bottom conduction) band along the y-axis. Figure 3(b) shows that this prediction is indeed true with excellent accuracy. More importantly, Fig. 3(c) shows that each and every projected Weyl node creates a topological defect of the WSL. Specifically, see the magnified views of the boxed region [Fig. 3(e)-(g)], which contains only two projected Weyl nodes. As one can see, there exists an edge dislocation exactly at each and every projected Weyl node. While appearing as three-way crossings at special energy cuts, e.g., in Fig. 3(f), topological defects of the WSL are generically end points of an open line segment, which is nothing but the bulk Fermi arc. Below, we provide a heuristic explanation for the formation of these bulk Fermi arcs as well as the previous one in Fig. 1. Before doing so, it is important to mention that the above sharp structure of topological defects gets softened in the presence of mixing between WSL eigenstates generated from all four bands, including the top/bottom valence/conduction bands. Fortunately, even with this mixing, the peculiar topological structure of Weyl semimetal is still clearly visible as a misalignment of WSL eigenstates near projected Weyl nodes. See the boxed region in Fig. 3(d) in comparison with that in Fig. 3(c). Figure 4 provides a heuristic explanation for the formation of bulk Fermi arcs. For simplicity, we first discuss the adiabatic situation described by Eq. (2), assuming that the Zak phase plays the deciding role in determining the topology of WSL eigenstates. The Zak phase winds by 2π either counter-clockwise or clockwise around each projected Weyl node. This means that the WSL eigenstates with two different indices n and n + 1 can be smoothly fused together encircling a projected Weyl node. Such a fusion can cause three-way crossings of WSL eigenstates, creating topological defects of the WSL. At general energy cuts, these three-way crossings get split in such a way that bulk Fermi arcs are formed.
As mentioned above, while softened, this structure of topological defects remains intact even in non-adiabatic situations, where mixings are allowed between WSL eigenstates generated from different bands.

Weyl fermion number conservation under an electric field. We argue that the existence of bulk Fermi arcs is actually required to conserve the Weyl fermion number under an electric field. It was shown by Nielsen and Ninomiya [13] that the chiral anomaly of the Weyl fermion can be resolved by considering Weyl fermions in a crystal, or in a lattice-regularized theory. Specifically, when parallel electric and magnetic fields are applied along the line connecting two Weyl nodes, the displacement of the Fermi surface (i.e., the Weyl fermion creation/annihilation) in one Weyl node is exactly compensated by that in the other, since both Fermi surfaces are interconnected below the Fermi level through a one-dimensional bulk conduction channel composed of filled states, therefore conserving the Weyl fermion number. Now, let us imagine what happens when parallel electric and magnetic fields are applied perpendicular to the Weyl-node connecting line. In this situation, the Fermi surfaces of two Weyl nodes are not interconnected through a single one-dimensional bulk conduction channel. To conserve the Weyl fermion number, additional conduction channels are necessary, which are provided by none other than surface Fermi arcs. Eventually, the whole conduction process forms a closed circuit composed of two surface Fermi arcs on both sides of the surface and two one-dimensional bulk conduction channels, via which Weyl fermions can travel freely through the bulk between the two surface-projected copies of each Weyl node. Note that this conduction process has been predicted to cause an intriguing quantum oscillation in Weyl semimetal [32-34]. (FIG. 5: layer-by-layer constant-energy momentum spectra of WSL eigenstates in the (k_x, k_z) plane for n_y = 1, 3, 5, 7, 9, 11, showing the evolution from a surface to a bulk Fermi arc. Here, we analyze the model Hamiltonian in Eq. (3) under an electric field with y-axis-cut surfaces. n_y denotes the layer index measured from a y-axis-cut surface at n_y = 1. The electric field is applied along the y-direction with its strength set equal to eaE/t = 0.5. Other model parameters are the same as those in Fig. 1. As one can see, there is a clear surface Fermi arc at n_y = 1, which is slightly deformed from that in the absence of an electric field in Fig. 1(a).) There is, however, a hidden problem when this argument is applied to the situation with finite electric fields. Under any finite electric field, the surface Fermi arc on one side is energetically far separated from that on the opposite side (provided that the system is macroscopically large). This means that the whole conduction process cannot form a closed circuit at the same energy level. A resolution of this problem is that there exist many bulk Fermi arcs in conjunction with the surface counterparts, which form a chain of many closed circuits, eventually connecting both sides of the surface. Below, we provide explicit numerical evidence supporting this argument. Figure 5 shows layer-by-layer constant-energy momentum spectra of WSL eigenstates in the model Hamiltonian in Eq. (3) under an electric field with y-axis-cut surfaces. Specifically, we compute the following spectral function:

A_{n_y}(k_⊥, ω) = −(1/π) Im Tr {[ω + iη − H̃(k_⊥) − V]^{−1}}_{n_y,n_y},

where n_y is the layer index and the trace Tr is taken over only the pseudospin index. H̃ is the same as in Eq. (5) except that, here, a y-axis-cut surface is located at n_y = 1.
It is important to note that the surface Fermi arc is joined with a partner Fermi arc at two projected Weyl nodes, forming a closed circuit together. This partner Fermi arc is the first in a series of many bulk Fermi arcs forming the periodic structure of the WSL. One may ask how the connectivity of surface Fermi arcs evolves into that of bulk Fermi arcs. In the above example, the two connectivities happen to be the same, but in general they can be very different, as seen in Fig. 4. As explained previously, the connectivity of surface Fermi arcs is predominantly determined by the π Zak-phase condition, while that of bulk Fermi arcs is determined by a delicate interplay between the Zak phase and the band dispersion.

Discussion. In this work, we have shown that Weyl nodes, which are responsible for the peculiar topological structure of Weyl semimetal, can be directly visualized as topological defects of the WSL emerging under an electric field. This opens up the possibility of a novel spectroscopic method to characterize Weyl semimetal. Below, we discuss briefly how this method can be realized in experiments. So far, the WSL has been observed only in artificial structures such as semiconductor superlattices [35,36] and optical lattices [37], due to the fact that the lattice spacing in a natural crystal is usually so small that a strong electric field is necessary to generate sufficiently well-developed WSL spectral lines; for typical experimental situations, the necessary electric-field strength is estimated to be of the order of 100 kV/cm [23]. To overcome this obstacle, there may be two possible strategies: (i) constructing a Weyl semimetal with a large lattice spacing, or (ii) applying a strong electric field without damaging the sample. For the first strategy, it has been proposed [38,39] that a Weyl semimetal can be constructed in a superlattice system composed of alternating layers of three-dimensional topological insulators and ordinary insulators. Meanwhile, there has been a recent surge of proposals for constructing Weyl semimetals in optical lattice systems with cold atoms [40-45]. Our method can be particularly useful for such cold-atom Weyl semimetals in optical lattice systems, which are known to suffer from various detection issues: (i) edges/surfaces are not well defined [46], and (ii) transport measurements are limited, or have different characteristics from those in condensed matter systems [47]. Our method, which detects a bulk property in a non-transport measurement, could be an ideal alternative. For the second strategy, various pump-probe techniques can be useful, since a strong electric field can be applied in the form of a pulse or radiation without damaging the sample [48,49].

Methods. Here, we discuss how to diagonalize the NASH efficiently to obtain the spectral function of WSL eigenstates. One method is to Fourier-transform the NASH from the momentum to the real space, which involves Fourier-transforming both the energy dispersion Ê(k) and the non-Abelian Berry connection Â(k) [23]. Unfortunately, this method turns out to be inefficient in Weyl semimetal due to a slow convergence of the truncation error for higher-order Fourier components. A more efficient alternative is to rewrite the differential operator i∇_k in a discrete momentum representation, which is convenient for numerical diagonalization.
To this end, it is important to note that i∇_k is in fact the position operator R̂, which is represented as a matrix in the momentum space as follows:

⟨k| R̂ |k′⟩ = i∇_k δ(k − k′).

From this point forward, let us focus on the position and momentum components parallel to the electric field, which are denoted as R_∥ and k_∥, respectively. Next, we note the following representation of the delta function using the so-called Dirichlet kernel:

δ(Δk_∥) = lim_{N→∞} (1/2π) sin[Δk_∥(N + 1/2)] / sin(Δk_∥/2),

where Δk_∥ = k_∥ − k′_∥. Motivated by this equality, we replace the delta function by its discrete version: δ_disc(Δk_∥) = [1/(2N + 1)] sin[Δk_∥(N + 1/2)] / sin(Δk_∥/2), where Δk_∥ = 2πj/(2N + 1) with j = −N, −N + 1, ..., N − 1, N. This leads to a matrix representation of the position operator R̂_∥ in the discrete momentum space:

⟨k_∥| R̂_∥ |k′_∥⟩ = [a/(2N + 1)] Σ_{n=−N}^{N} n e^{−i(k_∥ − k′_∥)na}.

This representation may seem natural from a slightly different, but more physical, perspective; what we have done is basically equivalent to Fourier-transforming the position operator from the real lattice space with a finite length L = 2N + 1 to the discrete parallel momentum space with k_∥ = 2πj/L, j = −N, −N + 1, ..., N − 1, N. It is important to note that Ê(k) is a simple diagonal matrix with respect to both the band and discrete parallel momentum indices. On the other hand, Â(k) is a 2 × 2 matrix with generally non-zero off-diagonal elements with respect to the band index, while being a diagonal matrix with respect to the discrete parallel momentum index. Of course, R̂_∥ is a diagonal matrix with respect to the band index. With the knowledge of all these operators in the above discrete momentum representation, the NASH can be diagonalized to generate WSL eigenstates as a function of the perpendicular momentum, k_⊥. Specifically, we compute the following spectral function of WSL eigenstates obtained from the NASH:

A(k_⊥, ω) = −(1/π) Im Tr {[ω + iη − H_NASH(k_⊥)]^{−1}},

where the trace Tr is taken over both the band and discrete parallel momentum indices.
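The following sketch implements this Methods recipe: the parallel position operator is built directly in the discrete momentum basis k_j = 2πj/(2N + 1) and combined with user-supplied band energies and (possibly non-Abelian) Berry connections to diagonalize the NASH. The array shapes and the convention a = 1 are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np

# Sketch: diagonalize the requantized NASH, H = E(k) + eaE (R_par + A(k)),
# in the discrete parallel momentum basis (lattice constant a = 1).
# Ek: (L, nb) array of band energies on the grid k_j = 2 pi j / L;
# Ak: (L, nb, nb) array of Berry connections, Hermitian in the band index.

def position_operator(N, a=1.0):
    L = 2 * N + 1
    n = np.arange(-N, N + 1)                 # real-space sites
    k = 2 * np.pi * np.arange(-N, N + 1) / L
    dk = k[:, None] - k[None, :]
    # <k_j| R |k_j'> = (a/L) sum_n n exp(-i (k_j - k_j') n a)
    return (a / L) * np.sum(n[None, None, :]
                            * np.exp(-1j * dk[:, :, None] * n * a), axis=2)

def nash_eigenvalues(Ek, Ak, eaE, N):
    L, nb = Ek.shape
    assert L == 2 * N + 1
    R = position_operator(N)
    H = np.zeros((L * nb, L * nb), dtype=complex)
    for j in range(L):
        jb = slice(j * nb, (j + 1) * nb)
        H[jb, jb] += np.diag(Ek[j]) + eaE * Ak[j]    # diagonal in k_j
        for jp in range(L):
            jpb = slice(jp * nb, (jp + 1) * nb)
            H[jb, jpb] += eaE * R[j, jp] * np.eye(nb)
    return np.linalg.eigvalsh(H)             # WSL ladder at this k_perp
```

Setting the off-diagonal elements of Ak to zero reproduces the adiabatic formula in Eq. (2), which is a convenient consistency check for any concrete model.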
5,985
2016-08-19T00:00:00.000
[ "Physics" ]
Increased ionization supports growth of aerosols into cloud condensation nuclei

Ions produced by cosmic rays have been thought to influence aerosols and clouds. In this study, the effect of ionization on the growth of aerosols into cloud condensation nuclei is investigated theoretically and experimentally. We show that the mass-flux of small ions can constitute an important addition to the growth caused by condensation of neutral molecules. Under atmospheric conditions the growth from ions can constitute several percent of the neutral growth. We performed experimental studies which quantify the effect of ions on the growth of aerosols between nucleation and sizes >20 nm and find good agreement with theory. Ion-induced condensation should be of importance not just in Earth's present-day atmosphere for the growth of aerosols into cloud condensation nuclei under pristine marine conditions, but also under elevated atmospheric ionization caused by increased supernova activity.

Clouds are a fundamental part of the terrestrial energy budget, and any process that can cause systematic changes in cloud micro-physics is of general interest. To form a cloud droplet, water vapor needs to condense onto aerosols acting as cloud condensation nuclei (CCN) of sizes of at least 50-100 nm [1], and changes in the number of CCN will influence the cloud microphysics [2,3]. One process that has been pursued is driven by ionization caused by cosmic rays, which has been suggested to be of importance by influencing the density of CCN in the atmosphere and thereby Earth's cloud cover [4-7]. Support for this idea came from experiments, which demonstrated that ions significantly amplify the nucleation rate of small aerosols (≈1.7 nm) [8,9]. However, to affect cloud properties, any change in small aerosols needs to propagate to CCN sizes of 50-100 nm, but such changes were subsequently found by numerical modeling to be too small to affect clouds [3,10,11]. The proposed explanation for this deficit is that additional aerosols reduce the concentration of the gases from which the particles grow, and a slower growth increases the probability of smaller aerosols being lost to preexisting aerosols. This has led to the conclusion that no significant link between cosmic rays and clouds exists in Earth's atmosphere. This conclusion stands in stark contrast to a recent experiment demonstrating that when excess ions are present in the experimental volume, all extra nucleated aerosols can grow to CCN sizes [12]. But without excess ions in the experimental volume, any extra small aerosols (3 nm) are lost before reaching CCN sizes, in accordance with the above-mentioned model results. The conjecture was that an unknown mechanism is operating, whereby ions facilitate the growth and formation of CCN. Additional evidence comes from atmospheric observations of sudden decreases in cosmic rays during solar eruptions, in which a subsequent response is observed in aerosols and clouds [6,7]. Again, this is in agreement with a mechanism by which a change in ionization translates into a change in CCN number density. However, the nature of this micro-physical link has been elusive. In this work we demonstrate, theoretically and experimentally, the presence of an ion mechanism, relevant under atmospheric conditions, where variations in the ion density enhance the growth rate from condensation nuclei (≈1.7 nm) to CCN.
It is found that an increase in ionization results in a faster aerosol growth, which lowers the probability for the growing aerosol to be lost to existing particles, and more aerosols can survive to CCN sizes. It is argued that the mechanism is significant under present atmospheric conditions and even more so during prehistoric elevated ionization caused by a nearby supernova. The mechanism could therefore be a natural explanation for the observed correlations between past climate variations and cosmic rays, modulated by either solar activity [13-17] or by supernova activity in the solar neighborhood on very long time scales, where the mechanism will be of profound importance [18-20].

Results. Theoretical model and predictions. Cosmic rays are the main producers of ions in Earth's lower atmosphere [21]. These ions interact with the existing aerosols, and charge a fraction of them. However, this fraction of charged aerosols is independent of the ionization rate in steady state: even though the electrostatic interactions enhance the interactions among the charged aerosols and between these aerosols and neutral molecules, the increased recombination ensures that the equilibrium aerosol charged fraction remains the same [22]. Ion-induced nucleation will cause the small nucleated aerosols to be more frequently charged relative to an equilibrium charge distribution, but ion recombination will move the distribution towards charge equilibrium, typically before the aerosols reach ~4 nm [23]. Changing the ionization is therefore not expected to have an influence on the number of CCN through Coulomb interactions between aerosols. However, this argument disregards that the frequency of interactions between ions and aerosols is a function of the ion density, and that each time an ion condenses onto an aerosol, a small mass (m_ion) is added to the aerosol. As a result, a change in ion density has a small but important effect on the aerosol growth rate, since the mass flux from the ions to the aerosols increases with the ion density. This mass flux is normally neglected when compared to the mass flux of neutral molecules (for example sulfuric acid, SA) to the aerosols by condensation growth, as can be seen from the following simple estimate: the typical ion concentration in the atmosphere is of the order of ≈10^3 ions cm^-3; however, the condensing vapor concentration (SA) is typically of the order of ≈10^6 molecules cm^-3. The ratio between them is 10^-3, from which one might conclude that the effect of ions on the aerosol growth is negligible. Why this is not always the case will now be shown. The mass flux to neutral aerosols consists not only of the condensation of neutral molecules, but also of two terms which add mass due to the recombination of a positive (negative) ion with a negative (positive) aerosol. Furthermore, as an ion charges a neutral aerosol, the ion adds m_ion to its mass. Explicitly, taking the above-mentioned flux of ion mass into account, the growth of aerosols by condensation of a neutral gas and singly charged ions becomes

∂N_i(r, t)/∂t = −Σ_j ∂/∂r [I_{i,j}(r, t) N_j(r, t)],   (1)

with i and j = (0, +, −) referring to neutral, positively, and negatively charged particles. Here r and t are the radius of the aerosol and the time. N_i = (N_0, N_+, N_−) is the number density of neutral, positive, and negative aerosols.
n_0 is the concentration of condensible gas, and n_+, n_− are the concentrations of positive and negative ions, while A_i = m_i/(4πr²ρ), with m_i being the mass of the neutral gas molecule (i = 0) or the average mass of positive/negative ions, i = (+, −); ρ is the mass density of the condensed gas, and β is the interaction coefficient between the molecules (or ions) and neutral and/or charged aerosols (see Methods for details on the derivation of the equations, the interaction coefficients, details of the experiment, and the (m_ion/m_0) ratio of 2.25). β_00, β_+0, and β_−0 are the interaction coefficients describing the interaction between neutral aerosols of radius r and neutral molecules, positive ions, and negative ions, respectively, whereas β_0+ and β_0− are the interaction coefficients between neutral molecules and positively/negatively charged aerosols. Finally, β_+− corresponds to the recombination between a positive ion and a negative aerosol of radius r, and vice versa for β_−+ [24]. If no ions are present, the above equations simplify to the well-known condensation equation [25],

dr/dt = A_0 β_00 n_0 = (m_0 β_00 n_0)/(4πr²ρ),   (2)

which is the growth rate of the aerosol radius due to the condensation of molecules onto the aerosols. It is the change in growth rate caused by ions that is of interest here. By assuming a steady state for the interactions between ions and aerosols, we find [22]

N_±(r, t) = [β_±0 n_±/(β_∓± n_∓)] N_0(r, t),   (3)

which, using N_tot = N_0 + N_+ + N_−, gives

N_0(r, t)/N_tot(r, t) = [1 + β_+0 n_+/(β_−+ n_−) + β_−0 n_−/(β_+− n_+)]^{−1}.   (4)

Equations (3) and (4) can be inserted into the components of Eq. (1) (for i = (0, +, −)). Assuming symmetry between the positive and negative charges, i.e., m_ion ≡ m_+ = m_−, β_±0 ≡ β_−0 = β_+0, β_±∓ ≡ β_+− = β_−+, and n_ion ≡ n_+ = n_−, finally leads to (see Methods)

∂N_tot(r, t)/∂t = −∂/∂r {A_0 β_00 n_0 [1 + Γ] N_tot(r, t)},   (5)

where

Γ = 4 (β_±0/β_00)(m_ion/m_0)(n_ion/n_0)(N_0/N_tot).   (6)

The 1 term appearing in Eq. (5) is the result of the approximation (1 + 2(β_0± β_±0)/(β_±∓ β_00))/(1 + 2β_±0/β_±∓) ≈ 1, good to 3 × 10^-4 for a 10 nm aerosol and decreasing for d > 10 nm. The bracketed term in Eq. (5) is related to the rate of change of the aerosol radius,

dr/dt = (m_0 β_00 n_0)/(4πr²ρ) (1 + Γ).   (7)

This growth rate is one of the characteristic equations describing aerosol evolution, and it is valid independent of any losses [26]. It is Γ, in Eq. (6), which quantifies the net effect of ion condensation. The term 4(β_±0/β_00)(N_0/N_tot) depends on the electrostatic interactions, whereas (n_ion/n_0) and (m_ion/m_0) depend on the specific concentrations and parameters. Figure 1a portrays this part together with (β_±0/β_00) and (N_0/N_tot). Figure 1b depicts the size of Γ in % of the neutral condensation, as a function of the ionization rate q and diameter d of the aerosols, for an average atmospheric sulfuric acid concentration of n_0 ≈ 1 × 10^6 molecules cm^-3, m_0 = 100 AMU, and a mass ratio (m_ion/m_0) of 2.25 (see Methods). It should be noted that the terms β_±0 and β_00 also depend on the mass and diameter of the ions and neutral molecules, which may vary depending on composition. Both the exact masses and the mass asymmetry between ions can vary; observationally, positive ions tend to be heavier than negative ions [27]. There are additional caveats to the theory, which will be examined in the Discussion section.
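As a numerical illustration of Eq. (6), the sketch below evaluates Γ for representative concentrations. The interaction coefficients used here are placeholder values of plausible magnitude only; the paper computes them from Brownian diffusion including Van der Waals, Coulomb, and viscous corrections [24].

```python
# Sketch: relative growth-rate enhancement Gamma of Eq. (6),
#   Gamma = 4 (beta_pm0/beta_00) (m_ion/m_0) (n_ion/n_0) (N_0/N_tot),
# with N_0/N_tot = 1 / (1 + 2 beta_pm0/beta_pmmp) from Eq. (4) under
# charge symmetry.  The beta values below are illustrative placeholders.

def gamma_enhancement(beta_00, beta_pm0, beta_pmmp,
                      n_ion, n_0, m_ion_over_m0=2.25):
    N0_over_Ntot = 1.0 / (1.0 + 2.0 * beta_pm0 / beta_pmmp)
    return (4.0 * (beta_pm0 / beta_00) * m_ion_over_m0
            * (n_ion / n_0) * N0_over_Ntot)

# Pristine-atmosphere example: n_ion ~ 1e3 cm^-3, n_0 ~ 1e6 cm^-3.
g = gamma_enhancement(beta_00=5e-10, beta_pm0=3e-9, beta_pmmp=1e-6,
                      n_ion=1e3, n_0=1e6)
print(f"ion contribution to growth: {100 * g:.1f} % of neutral condensation")
```

With these placeholder coefficients the naive estimate n_ion/n_0 = 10^-3 is amplified to a few percent, consistent with the order of magnitude quoted in the text.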
Experimental results. We now proceed to show that the predictions of the theory of ion-induced condensation outlined above can be measured in experiments. The latter were done in an 8 m³ stainless steel reaction chamber [12]. Due to wall losses, the growth rate of the aerosols could not be too slow; therefore the sulfuric acid concentration needed to be larger than n_0 ≈ 2 × 10^7 molecules cm^-3. This decreases the effect that ionization has on the aerosol growth by more than an order of magnitude when compared to typical atmospheric values. It is, however, a necessary constraint given the finite size of the chamber. The number of nucleated particles had to be low enough that coagulation was unimportant, thus keeping the growth fronts in size-space relatively sharp and allowing accurate growth rate measurements. The ionization in the chamber could be varied from 16 to 212 ion pairs cm^-3 s^-1 using two γ-sources. At maximum ionization, the nucleation rate of aerosols was increased by ~30% over the minimum ionization. The experiments were performed with a constant UV photolytic production of sulfuric acid, and every 4 h (in some cases 2 h) the ionization was changed from one extreme to the other, giving a cycle period P of 8 h (or 4 h) (see Methods). The effect of ion-induced nucleation during the part of the cycle with maximum ionization results in an increased formation of new aerosols (Fig. 2a). To improve the statistics, the cycle P was repeated up to 99 times. A total of 11 experimental runs were performed, representing 3100 h. Each data set was subsequently superposed over the period P, resulting in a statistically averaged cycle. An example of a superposed cycle can be seen in Fig. 2b, where the locations of the transition regions between the low and high aerosol density data can be used to extract the effect of ions on aerosol growth. The two transitions determine two trajectories, profile 1 and profile 2, in the (d, t)-plane, from which it is possible to estimate the difference in the growth time to a particular size d (see Methods). A CI API-ToF mass spectrometer was used to measure the sulfuric acid concentration during some of the experiments and to estimate the average ion mass [28]. The above theory predicts a difference in the time it takes the two profiles to reach a size r due to a growth velocity difference caused by ion condensation. (Fig. 1b caption: Γ (Eq. (6)) in %, in an atmosphere with a condensible gas concentration of 1 × 10^6 molecules cm^-3, as a function of aerosol diameter d and ionization rate q (left-hand axis) or ion density (right-hand axis). The contour lines show the relative size of the growth due to ion condensation in % of the usual condensation growth. The mass ratio (m_ion/m_0) is set to 2.25, and the mass of the neutral molecule is set to 100 AMU.) The time it takes for aerosols to grow to size r along the two possible profiles is expressed as

t_i(r) = ∫_{r_0}^{r} dr′ (dr′/dt)^{−1}, i = 1, 2,   (8)

where t_1 and t_2 refer to the times it takes profiles 1 and 2 to reach size r. The integrand is given by Eq. (7), and it takes into account that after half the period the γ-sources are switched off (or on). The above equations can be integrated numerically to find ΔT = t_2(r) − t_1(r) and allow comparison with the experiments.
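The sketch below integrates Eqs. (7) and (8) for the two profiles with the ion term toggled every half-period, which is one simple way to generate theoretical ΔT(d) curves of the kind compared with the data in Fig. 3. The growth velocity, Γ value, and seed size are illustrative placeholders, not the fitted experimental parameters.

```python
import numpy as np

# Sketch: growth-time difference DT(d) = t2(d) - t1(d) from Eqs. (7)-(8).
# dr/dt = v0 * (1 + Gamma(r) * s(t)), where s(t) toggles the ion term as
# the gamma-sources are cycled with period P.  gamma_of_r stands in for
# the size-dependent Gamma of Eq. (6); all numbers are illustrative.

def growth_time(d_targets, gamma_on_first, P=8 * 3600.0,
                v0=0.7e-9 / 3600.0, gamma_of_r=lambda r: 0.01,
                r0=0.85e-9, dt=1.0):
    # v0: neutral growth velocity dr/dt in m/s, taken r-independent here
    # for simplicity; t = 0 is the moment the sources switch ON (or OFF).
    r, t, out = r0, 0.0, []
    targets = iter(d_targets)
    nxt = next(targets)
    while True:
        first_half = int(t / (P / 2)) % 2 == 0
        ion_on = first_half if gamma_on_first else not first_half
        r += v0 * (1.0 + (gamma_of_r(r) if ion_on else 0.0)) * dt
        t += dt
        while 2 * r >= nxt:                    # target diameter reached
            out.append(t)
            try:
                nxt = next(targets)
            except StopIteration:
                return np.array(out)

d = np.linspace(4e-9, 20e-9, 17)               # diameters, m
t1 = growth_time(d, gamma_on_first=True)       # profile 1: gamma on first
t2 = growth_time(d, gamma_on_first=False)      # profile 2: gamma off first
dT = t2 - t1                                   # rises, then falls, as in Fig. 3
```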
During the first ~12 nm of growth, profile 1 grows with the γ-sources on, and it thus grows faster than profile 2 in the γ-off region; consequently, t_1(r) < t_2(r) and ΔT is increasing (Fig. 2b). This increase is due to the (nearly) constant difference in growth rate between the two profiles. But when profile 1 enters the second part of the cycle, when the γ-sources are off, profile 2 enters the high-ion state and is now growing faster than profile 1. Therefore, it is now profile 2 that grows faster, and ΔT starts to decrease. Figure 3 depicts three examples of ΔT as a function of the diameter d. It is seen that the data scatter around the theoretical curves (red (γ-on) and blue (γ-off)) obtained from Eqs. (7) and (8). The gray curves were produced by performing a LOESS (locally weighted smoothing) smoothing of the experimental data. The data also indicate that the enhanced growth continues up to at least 20 nm, in good agreement with theory. Note that although some of the experiments contain size distribution data above 20 nm, the profiles at those sizes become poorly defined, at which point we stop the analysis. All 11 experimental runs are summarized in Fig. 4, where ΔT is averaged between 6 and 12 nm and shown as a function of the SA concentration, which is obtained from either CI-API-ToF measurements and/or the slopes of the growth profiles. The red curve is the theoretical expectation for the γ-sources at maximum, and the blue curve is obtained with a 45% reduction in the ion density. Both are found by numerically solving Eqs. (7) and (8). The relative importance of ion condensation increases as the SA concentration is lowered, as predicted and in good agreement with theory.

Discussion. The most common effect of ions considered in aerosol models is aerosol charging, which increases the interaction between the charged aerosols and neutral aerosols/molecules, thereby increasing aerosol growth. However, as mentioned previously, the ion density does not affect the steady-state fraction of aerosols that are charged, such that the ion-induced interactions remain nearly constant, implying that no effect on the aerosol growth is expected by changing the background ionization. Nonetheless, experiments and observations do suggest that ions have an effect on the formation of CCN; the question has therefore been, how is this possible? The present work demonstrates that the mass flux associated with the aerosol charging by ions and with ion-aerosol recombination is important and should not be neglected. Γ in Eq. (7) contains the effect of the mass-flux of ions to aerosols and demonstrates the inherent amplification by the interaction between the ions and aerosols. This function Γ shows that the initial estimate of the mass-flux, (n_ion/n_0) = 10^-3, made in the introduction, gets multiplied by the size-dependent function 4(β_±0/β_00)(m_ion/m_0)(N_0/N_tot), which at maximum is about 60 (for m_ion/m_0 ~ 2.25), and is therefore nearly two orders of magnitude larger than the naive estimate. The simple expression for the growth rate, Eq. (7), can conveniently be used as a parametrization in global aerosol models. As a test of the theoretical model, extensive experiments were performed to study the effect on growth of the flux of ion-mass to the aerosols. One complication in the experiments was that aerosols were lost to the walls of the chamber. This meant that the concentration of SA could not be as low as the typical values in the atmosphere, ~10^6 molecules cm^-3, but had to be higher than ~2 × 10^7 molecules cm^-3. Therefore, the relative effect on the growth caused by the ions was more than an order of magnitude smaller, as can be seen from Eq. (7).
The experimental challenge was therefore to measure a <1% change in growth rate, which was done by cyclically repeating the experiments up to 99 times and averaging the results in order to minimize the fluctuations, with a total of 3100 h of experiments. Figures 3 and 4 demonstrate both the importance of varying the neutral SA gas concentration and the effect of changing the ion density, and show excellent agreement with the theoretical expectations. One important feature is that the effect on the growth rate continues up to ~20 nm, as can be seen in Fig. 3, which is larger than the sizes predicted for charged aerosols interacting with neutral molecules [29-31], and the effect is expected to increase for atmospherically relevant concentrations of SA. It should be noted that the early stages of growth are very important, since the smallest aerosols are the most vulnerable to scavenging by large pre-existing aerosols, and by reaching larger sizes (~20 nm) faster, the survivability increases considerably. (Fig. 2 caption, in part: a, experimental run V9 (Fig. 4). Note that profile 1 (profile 2) is initially growing with γ-on (γ-off) until d ≈ 13 nm; when d > 13 nm, profile 1 (profile 2) grows with γ-off (γ-on). It is the difference in timing of profiles 1 and 2 that contains information about the effect of ions on the growth rate.)

The presented theory is an approximation to a complex problem, and a number of simplifications have been made, which gives rise to some questions. We will now discuss the most pertinent: Will the material that constitutes the ions condense onto the aerosols in any case as neutral molecules? This will certainly be the case for the negative HSO4− ions. Assuming that all negative ions, n_−, are HSO4−, the number of neutral SA molecules would be n_0 − n_−, where n_− is the total negative ion density. Inserting values in the right-hand side of Eq. (7), for example for the present experiment n_0 ~ 10^7 molecules cm^-3 and n_− ~ 10^4 ions cm^-3, the correction to the growth rate from the decrease in neutral molecules is |Δ(dr/dt)/(dr/dt)| = |((n_0 − n_−) − n_0)/n_0| < 10^-3, but the ion condensation impact on the growth rate is of the order 10^-2 (Fig. 4), i.e., an order of magnitude larger. So even if the neutral molecules would condense eventually, it does not change the estimated growth rate by ion condensation significantly. This would also be the case under atmospheric conditions, where n_0 is of the order 10^6 cm^-3 and n_ion ~ 10^3 ions cm^-3, again a correction an order of magnitude lower than the ion condensation effect. Also note that the mass-flux from ions is larger than from the neutral molecules, which is part of the faster growth rate. In fact, even if the larger particles grow slightly slower due to a decrease in neutral molecules, the growth rate of the smaller particles is enhanced due to the ion interactions, which make the cross-section of the small particles larger (Fig. 5). This leads to the second question: Will the ion-mass that condenses onto the small aerosols stay in the aerosol and not evaporate after the aerosol is neutralized? This is slightly more difficult to answer, since the composition of all the ions is not known. (Fig. 5 caption: interaction coefficients between a small neutral particle of mass 100 AMU and a small ion of mass 225 AMU interacting with aerosols of diameter d. The interaction between neutral particles, β_00, is given by the blue curve; the interaction between small neutral particles and charged aerosols, β_0±, is given by the red curve. The interaction between a positive or negative ion and neutral aerosols, β_±0, is described by the yellow curve. Finally, the recombination coefficient between two oppositely charged particles is given by the brown curve. The coefficients were calculated assuming Brownian diffusion while including Van der Waals forces, Coulomb forces (including image charges), and viscous forces [24]. Symmetry between positive and negative ions has been assumed; see text.) The abundant terminal negative HSO4− ions are not more likely to evaporate than the neutral SA molecules. With respect to unknown positive or negative ions, the possibility of evaporation is more uncertain. If the material of some of the ions is prone to evaporate more readily, it would of course diminish the ion effect. The present experimental conditions did not indicate that this was a serious problem, but in an atmosphere of, e.g., more volatile organics it could be. Another issue is that sulfate ions typically carry more water than their neutral counterparts [32], and it is uncertain what happens with this excess water after neutralization of the aerosol. It was also assumed that the ion density was in steady state with the aerosol density at all times. This is of course an approximation, but from measurements of the ion density with a Gerdien tube [33], the typical time scale for reaching steady state is minutes, and the assumption of an ion density in steady state is thus reasonable [12]. It is worth noting that in the experiments two types of losses for ions are present, in addition to recombination: wall losses and the condensation sink to aerosols. Based on the loss rate of sulfuric acid, the wall loss rate is about 7 × 10^-4 s^-1, while the condensation sink for experiment V2 was 1.2 × 10^-4 s^-1. This means that the wall losses were dominant, and changes in the aerosol population will thus have a minimal influence on the ion concentration. Furthermore, recombination is by far the dominant loss mechanism for ions. For an ion production rate of 16 cm^-3 s^-1, the actual ion concentration is 92% of what a calculation based only on recombination gives; for larger ion production the recombination becomes more dominant, and vice versa. Under atmospheric conditions of high condensation sink and low ion production, this may constitute a significant decrease of the effect due to the reduced ion concentration, but under clean conditions and in the experiment the condensation sink has a minor effect. In order to calculate the interaction coefficients between ions and aerosols, it is necessary to know the mass of the ions and the mass of the aerosols. This is complex due to the many ion species and their water content, and as a simplification an average ion mass of 225 AMU was chosen. The sensitivity of the theory to changes in the ion mass in the range 130-300 AMU and in the mass of a neutral SA molecule in the range 100-130 AMU could change the important ratio (β_±0/β_00) by up to 20%. The possible relevance of the presented theory in Earth's atmosphere will now be discussed. From Eq. (6), the factor (n_ion/n_0) indicates that the relative importance of ion condensation will be largest when the concentration of condensing gas n_0 is small and the ion density is large. Secondly, the number density of aerosols should also be small, so that the majority of ions are not located on aerosols. This points to pristine marine settings over the oceans, away from continental and polluted areas.
Results based on airborne measurements suggest that the free troposphere is a major source of CCN for the Pacific boundary layer, where nucleation of new aerosols in clean, cloud-processed air in the Inter-Tropical Convergence Zone is carried aloft with the Hadley circulation and, via long tele-connections, distributed over ±30° latitude [34,35]. In these flight measurements, the typical growth rate of aerosols was estimated to be of the order ~0.4 nm h^-1 [35], which implies a low average concentration of condensing gas of n_0 ~ 4 × 10^6 molecules cm^-3. Measurements and simulations of the SA concentration in the free troposphere, annually averaged over day and night, are of the order n_0 ~ 10^6 molecules cm^-3 [36]. This may well be consistent with the above slightly larger estimate, since the aerosol cross-section for scavenging smaller aerosols increases with size, which adds to the growth rate. Secondly, the observations suggest that as the aerosols enter the marine boundary layer, some of the aerosols are further grown to CCN sizes [35]. Since the effect of ion condensation scales inversely with n_0, a concentration of n_0 ~ 4 × 10^6 molecules cm^-3 would diminish the effect by a factor of four. As can be seen in Fig. 1b, the effect of ion condensation for an ionization rate of q = 10 ion pairs cm^-3 s^-1 would change from 10 to 2.5%, which may still be important. Note that gases other than sulfuric acid can also contribute to n_0 in the atmosphere. As aerosols are transported in the Hadley circulation, they are moved into the higher part of the troposphere, where the intensity and variation in cosmic-ray ionization are the largest [37]. This suggests that there are vast regions where conditions are such that the proposed mechanism could be important, i.e., where aerosols are nucleated in the Inter-Tropical Convergence Zone and moved to regions where relatively large variations in ionization can be found. Here the aerosols could grow faster under the influence of ion condensation, and the perturbed growth rate will influence the survivability of the aerosols and thereby the resulting CCN density. Finally, the aerosols are brought down and entrained into the marine boundary layer, where cloud properties are sensitive to the CCN density [2]. Although the above is on its own speculative, there are observations to further support the idea. On rare occasions the Sun ejects solar plasma (coronal mass ejections) that may pass Earth, with the effect that the cosmic-ray flux decreases suddenly and stays low for a week or two. Such events, with a significant reduction in the cosmic-ray flux, are called Forbush decreases, and can be used to test the link between cosmic-ray ionization and clouds. A recent comprehensive study identified the strongest Forbush decreases, ranked them according to strength, and discussed some of the controversies that have surrounded this subject [7]. The atmospheric data consisted of three independent cloud satellite data sets and one data set for aerosols. A clear response to the five strongest Forbush decreases was seen in both the aerosols and all the low-cloud data [7]. The global average response time from the change in ionization to the change in clouds was ~7 days [7], consistent with the above growth rate of ~0.4 nm h^-1. The five strongest Forbush decreases (with ionization changes comparable to those observed over a solar cycle) exhibited inferred aerosol changes and cloud micro-physics changes of the order ~2% [7].
The ion production rate in the atmosphere varies between 2 and 35 ion pairs s^-1 cm^-3 [37], and from Fig. 1b it can be inferred that a 20% variation in the ion production can impact the growth rate in the range 1-4% (under pristine conditions). It is suggested that such changes in the growth rate can explain the ~2% changes in clouds and aerosols observed during Forbush decreases [7]. It should be stressed that there is not just one effect of CCN on clouds, but that the impact will depend on regional differences and cloud types. In regions with a relatively high number of CCN the presented effect will be small; in addition, the effect on convective clouds and on ice clouds is expected to be negligible. Additional CCN can even result in fewer clouds [38]. Since the ion condensation effect is largest for low SA concentrations and aerosol densities, the impact is believed to be largest in marine stratus clouds. On astronomical timescales, as the solar system moves through spiral-arm and inter-arm regions of the Galaxy, changes in the cosmic-ray flux can be much larger [18-20]. Inter-arm regions can have half the present-day cosmic-ray flux, whereas spiral-arm regions should have at least 1.5 times the present-day flux. This should correspond to a ~10% change in aerosol growth rate between arm and inter-arm regions. Finally, if a near-Earth supernova occurs, as may have happened between 2 and 3 million years ago [39], the ionization can increase 100- to 1000-fold, depending on its distance to Earth and the time since the event. Figure 1b shows that the aerosol growth rate in this case increases by more than 50%. Such large changes should have a profound impact on CCN concentrations, the formation of clouds, and ultimately climate. In conclusion, a mechanism by which ions condense their mass onto small aerosols and thereby increase the growth rate of the aerosols has been formulated theoretically and shown to be in good agreement with extensive experiments. The mechanism of ion-induced condensation may be relevant in the Earth's atmosphere under pristine conditions, and able to influence the formation of CCN. It is conjectured that this mechanism could be the explanation for the observed correlations between past climate variations and cosmic rays, modulated by either solar activity [13-17] or supernova activity in the solar neighborhood on very long time scales [18-20]. The theory of ion-induced condensation should be incorporated into global aerosol models to fully test the atmospheric implications.

Methods. Correction to condensation due to ions. Expanding Eq. (1) gives

∂N_0/∂t = −∂/∂r [A_0 β_00 n_0 N_0 + A_+ β_+− n_+ N_− + A_− β_−+ n_− N_+],
∂N_+/∂t = −∂/∂r [A_0 β_0+ n_0 N_+ + A_+ β_+0 n_+ N_0],   (9)
∂N_−/∂t = −∂/∂r [A_0 β_0− n_0 N_− + A_− β_−0 n_− N_0],

where the indexes 0, +, and − refer to neutral, positively, and negatively charged particles. Here r and t are the radius of the aerosol and the time. N_0, N_+, and N_− are the number densities of neutral, positive, and negative aerosols. n_0 is the concentration of the condensible gas (usually sulfuric acid in the gas phase), n_+ and n_− are the concentrations of positive and negative ions, and A_0 = m_0/(4πr²ρ), A_+ = m_+/(4πr²ρ), and A_− = m_−/(4πr²ρ), where m_0 is the mass of the neutral gas molecule, m_+ and m_− are the average masses of positive/negative ions, ρ is the mass density of the condensing gas, and β is the interaction coefficient between the monomers and the neutral and/or charged aerosols. The parameters of the above model are shown in Fig. 5.
Using equilibrium between the aerosols and ions, we have

β_+0 n_+ N_0 = β_−+ n_− N_+,  β_−0 n_− N_0 = β_+− n_+ N_−,   (10)

while defining N_tot = N_0 + N_+ + N_− gives

N_0(r, t)/N_tot(r, t) = [1 + β_+0 n_+/(β_−+ n_−) + β_−0 n_−/(β_+− n_+)]^{−1}.   (11)

If we further assume symmetry between the positive and negative charges, i.e., that m_ion ≡ m_+ = m_−, β_±0 ≡ β_−0 = β_+0, β_±∓ ≡ β_+− = β_−+, as well as n_ion ≡ n_+ = n_−, such that

N_± = (β_±0/β_±∓) N_0,   (12)

then, for N_tot = N_0 + N_+ + N_−, we obtain

N_0(r, t)/N_tot(r, t) = [1 + 2β_±0/β_±∓]^{−1}.   (13)

Using Eq. (12) in Eq. (9) and using the charge symmetry gives

∂N_0/∂t = −∂/∂r [A_0 β_00 n_0 N_0 + 2 A_ion β_±∓ n_ion N_±],
∂N_±/∂t = −∂/∂r [A_0 β_0± n_0 N_± + A_ion β_±0 n_ion N_0].

Adding the three equations then results in

∂N_tot/∂t = −∂/∂r [A_0 n_0 (β_00 N_0 + 2β_0± N_±) + A_ion n_ion (2β_±0 N_0 + 2β_±∓ N_±)].

Using N_tot as a common factor, we then have

∂N_tot/∂t = −∂/∂r {[A_0 n_0 (β_00 + 2β_0± β_±0/β_±∓) + 4 A_ion n_ion β_±0] (N_0/N_tot) N_tot}.

Taking β_00 as a common factor and plugging Eq. (13) into the first term gives the expression

∂N_tot/∂t = −∂/∂r {A_0 β_00 n_0 [F + 4 (A_ion/A_0)(β_±0/β_00)(n_ion/n_0)(N_0/N_tot)] N_tot},

with F = [1 + 2β_0± β_±0/(β_00 β_±∓)]/(1 + 2β_±0/β_±∓). The above function F is equal to 1 + O(10^-2), and F is therefore replaced with 1. A simple rearrangement, using A_ion/A_0 = m_ion/m_0, provides the final form

∂N_tot/∂t = −∂/∂r {A_0 β_00 n_0 [1 + Γ] N_tot},

where

Γ = 4 (β_±0/β_00)(m_ion/m_0)(n_ion/n_0)(N_0/N_tot).
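The algebra leading from Eq. (9) to the final form above can be verified symbolically; a sketch using sympy is given below, where all β symbols and the prefactors A_0 and A_ion are treated as positive reals and A_ion/A_0 plays the role of m_ion/m_0.

```python
import sympy as sp

# Sketch: symbolic check of the Methods derivation.  With the symmetric
# steady state N_pm = (beta_pm0 / beta_pmmp) N_0, the summed mass flux
# should reduce to A0 b00 n0 (F + Gamma) N_tot with F as defined above
# and Gamma as in Eq. (6).

b00, b0pm, bpm0, bpmmp = sp.symbols('b00 b0pm bpm0 bpmmp', positive=True)
A0, Aion, n0, nion, N0 = sp.symbols('A0 Aion n0 nion N0', positive=True)

Npm = (bpm0 / bpmmp) * N0                      # Eq. (12)
Ntot = N0 + 2 * Npm                            # Eq. (13) denominator
flux = (A0 * n0 * (b00 * N0 + 2 * b0pm * Npm)
        + Aion * nion * (2 * bpm0 * N0 + 2 * bpmmp * Npm))

F = (1 + 2 * b0pm * bpm0 / (b00 * bpmmp)) / (1 + 2 * bpm0 / bpmmp)
Gamma = 4 * (bpm0 / b00) * (Aion / A0) * (nion / n0) * (N0 / Ntot)
target = A0 * b00 * n0 * (F + Gamma) * Ntot

print(sp.simplify(flux - target))              # -> 0
```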
Detailed description of the experimental setup. The experiments were conducted in a cubic 8 m³ stainless steel reaction chamber used in Svensmark et al. [12], shown schematically in Fig. 6. One side of the chamber is made of Teflon foil to allow the transmission of collimated UV light (253.7 nm), which was used for the photolysis of ozone to generate the sulfuric acid that initiates aerosol nucleation. The chamber was continuously flushed with 20 L min^-1 of purified air passing through a humidifier, 5 L min^-1 of purified air passing through an ozone generator, and 3.5 mL min^-1 of SO2 (5 ppm in air, AGA). The purified air was supplied by a compressor with a drying unit and a filter with active charcoal and citric acid. The chamber was equipped with gas analyzers for ozone and sulfur dioxide (a Teledyne 400 and a Thermo 43 CTL, respectively) and sensors for temperature and relative humidity. For aerosol measurements, a scanning mobility particle sizer (SMPS) system was used. The system consisted of an electrostatic classifier (TSI model 3080 with a model 3077A Kr-85 neutralizer) using a nano-DMA (TSI model 3085) along with either one of two condensation particle counters (TSI model 3775 or 3776). For some of the experiments, a CI API-ToF [28] using HNO3 as the ionizing agent was used to measure the sulfuric acid in the chamber. The ionization in the chamber could be increased by two 27 MBq Cs-137 gamma sources placed 0.6 m from opposing sides of the chamber, with the option of putting attenuating lead plates of 0.5, 1.0, and 2.0 cm thickness in front of each source. At full strength the sources increase the ionization in the chamber to 212 ion pairs cm^-3 s^-1.

Details of the data analysis. A total of 11 experimental runs, totaling 3100 h of measurements, were made with varying settings. The settings for each of the experiments are shown in Table 1. (Table 1 notes: (a) the name of the experiment, used for reference; an asterisk (*) next to the name indicates that sulfuric acid was measured during the experiment. (b) The length of the period P, where a P of 4 h means that the experiment had 2 h of γ-rays on and 2 h of γ-rays off. (c) The number of repetitions (periods) of the experiment. (d) The scan range of the DMA, which was narrowed in later runs without changing the scan time to improve counting statistics. (e) The setting of the UV light used to produce sulfuric acid, in percentage of maximum power.) To detect an eventual difference in growth rate, the following method was employed. For each experimental run, each size-bin was normalized, and then the individual periods were superposed to reduce the noise in the data, as shown in Fig. 2 of the main paper. The superposed data were then used for further analysis. For each size-bin recorded by the SMPS, the number of aerosols relative to the mean number ⟨N(d)⟩ = (1/T) ∫_0^T N_tot(d, t′) dt′ was then plotted, as exemplified in the top curve of Fig. 7. The derivative of this curve, which is the rate of change of the aerosol density of a given size, is used to determine the temporal positions of profiles 1 and 2. This can be achieved by first calculating the derivative (d(N_tot/⟨N(d)⟩)/dt)², then normalizing with this function's maximum value at diameter d (the square was used to get a positive definite and sharply defined profile), and then smoothing using a boxcar filter with a width of typically 7-16 min, shown as the lower black curve in Fig. 7. The width of the boxcar filter was typically determined from the requirement that the Gaussian fit converged; for instance, in some cases with a low sulfuric acid concentration a longer boxcar filter was used, due to the relatively higher noise. On top of the black curve in Fig. 7, a dashed red and a dashed blue curve are superimposed. These are Gaussian fits to the two maxima. The position of the center of each of the Gaussian profiles gives the growth time relative to the time the γ-sources were opened (profile 1) or closed (profile 2). The difference between these growth times then gives the ΔT for each bin size, as shown in Fig. 3. The ΔT values can then be compared with the theoretical expectations. Averaging the individual ΔT values for sizes between 6 and 12 nm finally results in the ΔT shown in Fig. 4.

The m_ion/m_0 ratio. Table 2 summarizes the average masses (m/q) of a series of runs using the API-ToF without the CI unit to measure negative ions, in order to determine the ratio m_ion/m_0. (Table 2 notes: each line shows the conditions and average m/q for a 4-h API-ToF mass spectrum without the CI. Column 1 shows the UV level as a percentage of maximum power. Column 2 shows whether the γ-ray sources were on or off. Column 3 is the average m/q of the spectrum. Column 4 is the average mass of the spectrum, when 1 water (m/q 18) has been added to all masses except the first four sulfuric acid peaks (m/q 97, 195, 293, 391), which have 1.5 waters per sulfuric acid.) Note that water evaporates in the API-ToF, so the masses measured are lower than the actual masses of the clusters. The ratio of 2.25 for m_ion/m_0 used in the calculations would imply that, for a dry (zero-water) neutral sulfuric acid molecule (98 AMU), m_ion should be 220 m/q. The amount of water on a sulfuric acid molecule varies according to relative humidity; for 50% RH it is typically 1-2 water molecules. Assuming 1.5 waters and m_ion/m_0 = 2.25 would give a wet mass of 281 AMU. However, the experiments were performed at a lower RH than 50%; also note that hydrogen sulfate ions attract more water than the neutral sulfuric acid molecule [32]. Last, the positive ions were not measured, and these are typically heavier than the negative ions [27].

Data availability. The data generated during the current study are available from the corresponding author on reasonable request.
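A sketch of the superposed-epoch analysis described above follows: periods are folded and averaged, each size bin is normalized, the squared time derivative is boxcar-smoothed, and Gaussians are fitted in the γ-on and γ-off half-periods to extract ΔT per bin. The input array, filter width, and fit initialization are hypothetical choices for illustration, not the exact pipeline used for the published figures.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.ndimage import uniform_filter1d

# Sketch of the analysis chain: `counts` is a hypothetical
# (n_time, n_bins) array of SMPS data covering an integer number of
# cycles, each n_per samples long, starting at a gamma-on switch.

def gaussian(t, amp, mu, sig):
    return amp * np.exp(-0.5 * ((t - mu) / sig) ** 2)

def delta_t_per_bin(counts, n_per, boxcar=9):
    n_cyc = counts.shape[0] // n_per
    sup = counts[:n_cyc * n_per].reshape(n_cyc, n_per, -1).mean(axis=0)
    sup = sup / sup.mean(axis=0, keepdims=True)      # N_tot / <N(d)>
    dT = []
    for b in range(sup.shape[1]):
        prof = np.gradient(sup[:, b]) ** 2           # squared derivative
        prof = uniform_filter1d(prof / prof.max(), boxcar)
        t = np.arange(n_per, dtype=float)
        mus = []
        # One Gaussian fit per half-period (gamma-on, then gamma-off).
        for lo, hi in [(0, n_per // 2), (n_per // 2, n_per)]:
            seg, ts = prof[lo:hi], t[lo:hi]
            p0 = (seg.max(), ts[np.argmax(seg)], float(boxcar))
            try:
                popt, _ = curve_fit(gaussian, ts, seg, p0=p0)
                mus.append(popt[1] - lo)             # time since switch
            except RuntimeError:
                mus.append(np.nan)
        dT.append(mus[1] - mus[0])                   # t2 - t1 for this bin
    return np.array(dT)
```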
9,408
2017-12-01T00:00:00.000
[ "Environmental Science", "Physics" ]
Experimental study of the 66Ni(d,p)67Ni one-neutron transfer reaction J. Diriken,1,2 N. Patronis,1,3 A. Andreyev,1,4,5 S. Antalic,6 V. Bildstein,7 A. Blazhev,8 I. G. Darby,1 H. De Witte,1 J. Eberth,8 J. Elseviers,1 V. N. Fedosseev,9 F. Flavigny,1 Ch. Fransen,8 G. Georgiev,10 R. Gernhauser,7 H. Hess,8 M. Huyse,1 J. Jolie,8 Th. Kröll,7 R. Krücken,7 R. Lutter,11 B. A. Marsh,9 T. Mertzimekis,12 D. Muecher,7 R. Orlandi,1,5,13 A. Pakou,3 R. Raabe,1 G. Randisi,1 P. Reiter,8 T. Roger,1 M. Seidlitz,8 M. Seliverstov,1,9 C. Sotty,10 H. Tornqvist,14 J. Van De Walle,14 P. Van Duppen,1 D. Voulot,9 N. Warr,8 F. Wenander,9 and K. Wimmer7 1KU Leuven, Instituut voor Kernen Stralingsfysica, Celestijnenlaan 200D, B-3001 Leuven, Belgium 2Belgian Nuclear Research Centre SCK CEN, Boeretang 200, B-2400 Mol, Belgium 3Department of Physics and HINP, The University of Ioannina, GR-45110 Ioannina, Greece 4Department of Physics, University of York, YO10 5DD, United Kingdom 5Advanced Science Research Center, Japan Atomic Energy Agency (JAEA), Tokai-mura, 319-1195, Japan 6Department of Nuclear Physics and Biophysics, Comenius University, 84248 Bratislava, Slovakia 7Physik Department E12, Technische Universität München, D-85748 Garching, Germany 8IKP, University of Cologne, D-50937 Cologne, Germany 9AB Department, CERN 1211, CH-Geneva 23, Switzerland 10CSNSM, CNRS/IN2P3, Universite Paris-Sud 11, UMR8609, F-91405 ORSAY-Campus, France 11Fakultät für Physik, Ludwig-Maximilians-Universität München, D-85748 Garching, Germany 12INP, NCSR "Demokritos", GR-15310, Ag. Paraskevi/Athens, Greece 13School of Engineering, University of the West of Scotland, Paisley, PA1 2BE, United Kingdom, and the Scottish Universities Physics Alliance (SUPA) 14PH Department, CERN 1211, CH-Geneva 23, Switzerland (Received 25 February 2015; revised manuscript received 28 April 2015; published 20 May 2015)

The main reason for this enhanced collectivity is believed to be a combination of the reduction of a somewhat shallow N = 40 shell gap (because of the repulsive πf_7/2 νg_9/2 tensor interaction when protons are removed [17]) and the presence of the νg_9/2-d_5/2-s_1/2 orbital sequence directly above this gap, which could strongly enhance quadrupole collectivity [18-20]. The latter is supported by the fact that large-scale shell-model calculations that do not include the νd_5/2 orbital in their valence space fail to reproduce the experimental trends [21]. In contrast, recent calculations encompassing enlarged valence spaces including the νd_5/2 orbital provide better agreement with the experimental data [19,22]. It should be noted that in the calculations of Ref. [19] the quadrupole-quadrupole interaction of the νg_9/2, d_5/2 orbitals is increased by 20% to correct for the absence of the νs_1/2 orbital in the valence space. The effect of the quadrupole coherence generated by this quasi-SU(3) sequence (Δj = 2), containing the νg_9/2 d_5/2 (s_1/2) partners, depends on their relative energy separation and thus on the N = 50 gap size. Recent calculations have shown that this particular gap size depends, because of three-body monopole forces [23], on the occupancy of the νg_9/2 orbital itself (see Fig. 3 in Ref.
[24]). These calculations suggest that the N = 50 shell gap is established when the νg9/2 orbital is filled with neutrons and thus widens when approaching 78Ni (estimated gap size ≈ 5 MeV), hinting at a robust shell closure for the latter [24]. Near the N = 40 nucleus 68Ni the N = 50 shell gap is considerably weaker, which can lead to enhanced quadrupole collectivity. The calculations in Ref. [19] assume that the N = 50 shell gap evolves in a manner similar to that observed in the Zr isotopes [25], in combination with an estimated N = 50 g9/2-s1/2 gap size of 5 MeV in 78Ni. Experimental input on the size of the N = 50 shell gap near 68Ni would provide valuable information for these large-scale shell-model calculations, as it can serve as an anchor point for the gap-size evolution [19]. Calculations using three-body forces and information from the Zr chain resulted in an estimated N = 50 gap size of 1.5-2 MeV near N = 40 [19,24,25]. Among the less exotic nickel isotopes, only the peculiar case of 68Ni presents unresolved, conflicting experimental pictures. B(E2) measurements revealed a clear local minimum in the B(E2; 2+1 → 0+1) systematics and a maximum in the excitation energy of the 2+1 state [1][2][3][4]. This common fingerprint of magicity, along with the existence of μs isomers in this region [26], is in conflict with mass measurements, where the S2n systematics do not reveal an irregularity at N = 40 [27,28]. This apparent anomaly was attributed to the parity change between the pf shell below and the gd orbitals above the N = 40 harmonic-oscillator shell gap, requiring at least two neutrons to be excited to form a 2+ state. From an extreme single-particle shell-model perspective, 67Ni can be described as a one-neutron hole coupled to 68Ni, and hence its excitation spectrum is expected to contain a considerable amount of neutron single-particle strength at low energy, mainly particle states from the empty orbitals and hole states from the filled ones. Spectroscopic information on 67Ni is available from a range of experiments [26,[29][30][31][32][33][34][35]. Data from β decay provided tentative spin assignments and proposed configurations for the lowest excited states up to and including the 9/2+ isomer [29]. Deep-inelastic and multinucleon transfer reactions identified the positions of higher-lying excited states [30][31][32][33], and the spins of the first three states were fixed [31,33]. In the most recent deep-inelastic study, Ref. [33], yrast states up to 5.3 MeV were identified, all built on top of the 1007-keV isomer. The magnetic moment of the ground state was measured, and its value of 0.601μN differs by only 6% from the expected Schmidt value, hinting towards a very pure νp1/2 ground-state configuration [34]. Finally, the measurement of the g factor of the 13.3-μs [26] isomeric 9/2+ state at 1007 keV resulted in a value smaller by a factor of two than expected for a 1g9/2 configuration [35]. This reduction was attributed to a 2% admixture of proton 1p-1h M1 excitations (f7/2^-1 f5/2^1) across the Z = 28 gap, which would strongly affect the g factor [35]. The study in Ref.
[33] has shown that the 313- to 694-keV γ-decay sequence has a stretched-quadrupole character. The 13.3-μs half-life of the delayed 313-keV transition is compatible with an M2 transition, while the 150(4)-ps [36] 694-keV transition is consistent with an E2 character. The combination of all this information firmly fixes the spin sequence for the 1007-keV state, the 694-keV state, and the ground state to be 9/2+, 5/2-, and 1/2-, respectively. One-neutron transfer reactions populating states in 67,69Ni are a powerful tool to probe the stability of the N = 40 subshell closure, test the single-particle character of excited nuclear states, extract the centers of gravity of the neutron orbitals of interest, and determine the size of shell gaps. In this paper we present the results of a study of 67Ni produced in a 66Ni(d,p) reaction (Q value, 3.580 MeV [30,37]), favoring transfer with low ℓ values. The obtained experimental angular distributions are compared with distorted-wave Born approximation (DWBA) calculations, allowing spin and parity assignments and relative spectroscopic factors to be reported. The main findings of this work have already been published in Ref. [38]. In this paper more details on the experimental conditions and the analysis are presented. In Sec. II details about the experimental setup and measuring conditions are summarized, and the newly developed delayed-coincidence technique is discussed. The analysis of the data is presented in Sec. III, leading to the results reported in Sec. IV. In Sec. V the obtained results are compared with systematics in the lighter nickel isotopes and proton single-particle systematics in the N = 50 isotones near 90Zr. The results are also compared with shell-model calculations including an enlarged neutron valence space. A. Beam production and manipulation The radioactive 66Ni beam (T1/2 = 54.6 h [39]) was produced at the ISOLDE facility at CERN by bombarding a 50-g/cm² UCx target with pulses of 1.4-GeV protons with an intensity of ∼6 × 10¹² protons per pulse (average current of 1 μA). The interval between these pulses was always an integer multiple of 1.2 s. The target matrix was heated to a temperature of ∼2000 °C to optimize the diffusion and effusion times through the tungsten transfer line towards the ionization cavity. Here the nickel isotopes were selectively ionized in a three-step resonant laser ionization process (λ1 = 305.1 nm, λ2 = 611.1 nm, λ3 = 748.2 nm) using the RILIS laser ion source [40,41]. Because of the temperature of the hot cavity, elements with low ionization potentials (IP) can be surface ionized and cause contaminants, such as gallium (Z = 31, IP = 6.0 eV), to appear in the beam. The level of contamination was checked by comparing data taken with the RILIS lasers ON (containing both nickel and contaminants in the beam) with data taken in laser-OFF mode (contaminants only). From this comparison a beam purity of at least 99% 66Ni was determined.
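The purity check amounts to comparing count rates with the ionization lasers on and off. A minimal sketch, with placeholder rates rather than the measured ones:

```python
# Laser ON/OFF purity estimate; the rates below are hypothetical placeholders.
rate_laser_on = 4.1e6   # ions/s with RILIS lasers ON (Ni + surface-ionized contaminants)
rate_laser_off = 3.0e4  # ions/s with lasers OFF (contaminants only, e.g. Ga)

purity = 1.0 - rate_laser_off / rate_laser_on
print(f"beam purity >= {purity:.3%}")  # here ~99.3%, consistent with >= 99% 66Ni
```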
The positively charged nickel beam was extracted from the ion source by applying a 30-kV electrostatic potential and was subsequently sent through the general-purpose separator, resulting in a 66Ni beam that was injected into REX-TRAP [42]. In this Penning trap the beam was accumulated for 30 ms and cooled by interactions with the buffer gas present (usually Ne or Ar). This bunch of ions was thereafter transferred to REX-EBIS, the electron beam ion source, where the ions were brought to a higher charge state (16+). This leads to an A/q value of 4.125, chosen such that residual background ions from REX-EBIS do not overlap with the beam. The time necessary to reach this charge state (28 ms) was optimized for the element of interest. The trapping time in REX-TRAP equals this breeding time to synchronize the system. The bunch of highly charged isotopes was extracted from REX-EBIS and sent through an A/q separator to select one specific 66Ni charge state without contamination from the residual-gas ions [43]. For this experiment, the slow-extraction technique from REX-EBIS (i.e., a smooth drop of the trapping potential) was used to maximize the spread of the available ions within the 800-μs bunch window. Afterwards, the beam was accelerated by the REX accelerator, which consists of a low-energy RFQ (max 300 keV/u), an IHS structure (up to 0.8 MeV/nucleon), and a high-energy section (0.8-3.0 MeV/nucleon) containing three seven-gap resonators and one nine-gap resonator [44], before being delivered to the experimental setup. The final energy depends on the A/q of the beam and was 2.95 MeV/nucleon in this case. The global transmission efficiency of REX (including trapping and charge breeding) was of the order of 5%-10%. A 100-μg/cm²-thick CD2 target was placed in the center of the scattering chamber. The target purity was found to be 88%, based on the ratio of elastically scattered protons and deuterons. The average beam intensity during the 10-day experiment equaled 4.1 × 10⁶ pps, with a center-of-mass (CM) collision energy of 5.67 MeV. B. Detection arrays and signal handling The scattering target was surrounded by two sets of detection arrays: the T-REX charged-particle detection setup [45] and the Miniball (MB) γ array [46,47].
The T-REX charged-particle detection setup consisted of eight silicon ΔE-E telescopes (ΔE thickness, 140 μm; E thickness, 1000 μm), four in the forward and four in the backward direction (with respect to the target), covering an angular range from 27° to 78° in the forward and from 103° to 152° in the backward direction [45]. Each telescope consisted of 16 resistive position-sensitive strips oriented perpendicular to the beam direction, allowing position determination of the detected particles. Calibration of the ΔE detectors was done using a quadruple α source (148Gd, 239Pu, 241Am, and 244Cm). The shielded E_rest detectors were calibrated using Compton scattering of high-energy photons from 60Co and 152Eu γ-ray sources detected in T-REX-Miniball coincidences. In addition, data from stable-beam reaction experiments [e.g., 22Ne(d,p)23Ne] were used to improve the quality of the calibration. During the calibration process it was found that the full-energy signal of the ΔE detector depends on the position of the hit along the strip. All full-energy signals were hence corrected for this effect with parameters extracted from the α-source measurement, using the relationship E_corrected = E_measured/[1 - (0.5 - x)A], with A = 0.035 and x the normalized position along the strip (x = [0,1]). The global energy resolution for protons emitted in the (d,p) reaction and detected by the ΔE-E telescopes was determined by the combination of intrinsic detector resolution, position uncertainty, beam-spot size, energy losses, and the angular dependence of the particle kinematics, and was of the order of 1300 keV full width at half maximum (FWHM). With α sources, typical energy resolutions of 55 keV were achieved. The forward quadrants were shielded by a 12-μm Mylar foil to reduce the number of incident elastically scattered particles at laboratory angles greater than 70°, where the incident rate was high and the kinetic energy of the particles low because the reaction takes place in inverse kinematics. The influence of the Mylar foil on the detected energy of protons resulting from a (d,p) reaction is discussed in Sec. III A. The particle detectors were divided into two trigger groups (top-left and bottom-right), with the trigger condition being a hit in either the ΔE or the E part of one of the quadrants of the trigger group. The 64 channels of the position-sensitive strips were divided over two Mesytec MADC-32 modules (with internal time stamping), while the remaining signals (full ΔE energy and E_rest energy) were all connected to a separate MADC-32. Initially during the experiment, a significant number of background events was noticed in the backward quadrants of T-REX, directly proportional to the instantaneous beam intensity and target thickness. The combination of the slow extraction from REX-EBIS (see Sec. II A) and a reduction in beam intensity was necessary to control this problem, which was caused by random summing of δ electrons created by the heavy-ion beam interacting with the CD2 target or target-holder material [48].
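The position-dependent correction quoted above is straightforward to apply per strip. A short sketch of E_corrected = E_measured/[1 - (0.5 - x)A] with the quoted A = 0.035; the example energies are arbitrary:

```python
import numpy as np

A = 0.035  # correction parameter extracted from the alpha-source measurement

def correct_strip_energy(e_measured, x):
    """Correct the full-energy signal of a resistive strip for the hit position.

    x is the normalized position along the strip, in [0, 1]; at mid-strip
    (x = 0.5) the correction vanishes.
    """
    x = np.asarray(x, dtype=float)
    return np.asarray(e_measured, dtype=float) / (1.0 - (0.5 - x) * A)

# Example: the same 5486-keV alpha line detected at both ends and mid-strip
print(correct_strip_energy([5486.0, 5486.0, 5486.0], [0.0, 0.5, 1.0]))
```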
To detect the γ rays emitted after population of 67Ni in an excited state, eight Miniball cluster detectors were positioned around the scattering chamber [46]. Each Miniball cluster was composed of three hyperpure germanium crystals, each sixfold electrically segmented. The high granularity of the Miniball array allowed a precise determination of the direction of the detected γ rays, which was necessary to perform a Doppler correction of the detected γ-ray energy. This was needed because the decaying nuclei traveled at speeds around 0.08c while emitting γ rays, leading to Doppler shifts of the emitted wavelengths. The position of all clusters was determined with high accuracy by analyzing data from the 22Ne(d,p)23Ne reaction with known incoming energy and by measuring the Doppler shift of the 1017-keV line for each segment. The signals from the Miniball array were digitally handled by a series of digital gamma finder (DGF) modules, with an energy range of nearly 8 MeV. Energy calibration and efficiency determination were done using 152Eu and 207Bi sources. For the high-energy part of the spectrum, data from the β decay of a stopped 11Be beam (T1/2 = 13.76 s), including transitions up to 7.97 MeV, were used [49]. The total photopeak efficiency for 1-MeV γ transitions was found to be 5.9%. As the energy resolution of the detected protons in T-REX was insufficient to disentangle individual excited states purely on the basis of proton kinematics, proton-γ coincidences were necessary to obtain angular distributions. A similar strategy was used in one-nucleon transfer reactions on stable nuclei to extract angular distributions for unresolved levels, e.g., 64Zn(d,3Heγ) and 64Ni(d,3Heγ) [50,51]. Data were acquired during the 800-μs beam-ON window, during which a bunch of ions was ejected from REX-EBIS and accelerated by REX. After this window was closed, the obtained data were read out and another 800-μs beam-OFF window was started, encompassing natural background and β-decay radiation from isotopes stopped in the scattering chamber. The REX duty cycle is sufficiently long to allow acquisition and readout of both windows before the next pulse. All detected signals were directly time-stamped by internal clocks running at 40 MHz.
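The Doppler correction enabled by the Miniball segmentation reduces, for emission from a nucleus moving along the beam axis, to the standard relativistic formula. A minimal sketch, assuming β ≈ 0.08 as quoted above and a hypothetical detection angle:

```python
import numpy as np

def doppler_correct(e_detected, theta_lab, beta=0.08):
    """Return the gamma-ray energy in the emitter rest frame.

    theta_lab: angle (rad) between the beam axis and the gamma direction,
    taken here from the segment with the highest deposited energy.
    """
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return e_detected * gamma * (1.0 - beta * np.cos(theta_lab))

# Example: a line detected at 1017 keV in a segment at 45 degrees
print(doppler_correct(1017.0, np.deg2rad(45.0)))
```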
C. Delayed-coincidence technique In Sec. I the currently available experimental data concerning the 1007-keV 9/2+ isomeric state (T1/2 of 13.3 μs) in 67Ni were discussed. The μs lifetime of this state inhibits the analysis of prompt proton-γ coincidences with Miniball, and thus no angular distributions based on γ gates could be produced. For this purpose a delayed-coincidence (DeCo) technique was developed, encompassing a thick (≈60 μm), removable aluminum foil used to stop the incoming beam and a dedicated coaxial germanium detector for detecting the isomeric, delayed transitions of 313 and 694 keV emitted during the decay of the 1007-keV isomeric state. It should be noted that the 1007-keV isomeric state can be populated either directly in the transfer reaction, or when an excited state at higher excitation energy is produced that subsequently (promptly) decays to the 1007-keV isomeric state. The aluminum foil was positioned 2 m downstream of the target position and renewed every 8 h to limit the background originating from accumulating β-decaying nuclei, mainly 67Cu. The coincidence window between γ rays detected in the delayed-coincidence chamber and particles detected by T-REX was asymmetrically set to 120 μs, ranging from -40 μs to 80 μs with the particle time stamp as the reference point. The time relation between the detected protons and γ rays is shown in Fig. 1 of Ref. [38] for the 313-keV transition (the background to the left and right of the 313-keV transition is subtracted) and shows the definition of the delayed and random-delayed windows, which are both 40 μs long. The delayed-coincidence time window hence accounts for 87.5% of the isomeric transitions. As a comparison, the time relation between γ rays detected in Miniball and protons detected in T-REX is given in Fig. 1, showing the much narrower coincidence window. In the case of Miniball-T-REX coincidences, the detected radiation is either prompt or random as defined in Fig. 1. As the time of flight between the reaction target and the delayed-coincidence setup was of the order of 80 ns, losses from in-flight γ decays were negligible. The exponential shape has a fitted half-life of 13.7(6) μs, which is in good agreement with the previously measured values of 13.3(2) μs (Ref. [26]) and 13(1) μs (Ref. [35]) and confirms the weighted average of 13.3(2) μs [26]. The efficiency of the delayed-coincidence detection setup was determined in two steps: using a calibrated 152Eu point source at the position of the aluminum foil (absolute photopeak efficiency for a point source), and using the reaction data itself by comparing the intensities of the prompt γ transitions arriving on top of the isomer with the intensities of the 313- and 694-keV transitions in delayed coincidence with these events. The second step also includes the effect of a non-point-like source and the transmission efficiency between the reaction target and the delayed-coincidence setup. By comparing the results from both steps, this transmission efficiency could be determined. As an example, Fig. 2(a) shows the Miniball γ-ray spectrum in delayed coincidence with the 313- or 694-keV transitions, i.e., the prompt transitions feeding the isomer.
Figure 2(b) shows the inverse situation, namely the delayed-coincidence spectrum requiring a prompt 1201-keV transition in Miniball. One can compare the 1201-keV intensity in Fig. 3, which depends on the gate photopeak efficiency of Miniball (∝ ε_MB,1201), with the intensity of either the 313- or 694-keV transition in Fig. 2(b), which is proportional to the product of the delayed-coincidence detector photopeak efficiency, the gate photopeak efficiency of Miniball, and the transmission efficiency (∝ ε_DeCo,313 or 694 · ε_MB,1201 · ε_Trans). The integral counts of each peak are evaluated through a fit procedure (Gaussian shape). In the case of doublets (like the 1184- to 1201-keV and the 1331- to 1354-keV doublets) the fit procedure allows one to disentangle each contribution, which is then used in the efficiency calculation. The uncertainties that result from this fitting procedure are included in the obtained peak integral. The product of ε_DeCo,313 or 694 and ε_Trans defines the global efficiency for detection of the 313- or 694-keV transition in the DeCo setup. As all parameters except the transmission efficiency were known from source data, the transmission efficiency from the target position to the delayed-coincidence detection setup could be determined. An overview of these efficiencies is given in Table I, leading to an average transmission efficiency of 53(6)%. III. ANALYSIS A. Data structure The event-by-event structure of the data allowed particle-γ coincidences to be constructed by placing a 1-μs coincidence window around the time stamps of the detected signals. The effective particle-γ(MB) time structure within these events is shown in Fig. 1, indicating that the majority of the γ rays detected within 1 μs of a proton is indeed prompt radiation resulting from transfer reactions. Events outside the ±0.5-μs time window were due to higher-multiplicity events and shifts from the walk correction applied to the time stamps of low-energy γ rays. The data in the random time window of Fig. 3 were scaled on the basis of the integrals of γ rays originating from β-decaying nuclei implanted in the detection chamber in the prompt and random time windows. The prompt nature of the radiation is also evident in Fig. 3, where the corresponding γ spectra are shown for both prompt and random proton-γ timing conditions. The data in the random spectrum are limited and only contain a doubly humped structure around 1039 keV, the dominant transition in the β decay of 66Cu (note that no γ rays are emitted in the β decay of 66Ni) [52], whose shape results from the Doppler correction procedure. Traces of the most intense prompt transitions, Compton background of the 1039-keV transition, and radiation from the REX accelerator are also observed. After the event building and calibration of the raw detected signals, the kinematic reconstruction of the events was performed. In the case of the γ rays detected by Miniball, the add-back procedure was performed by summing γ-ray energies detected within the same cluster. The segment in which the highest energy was deposited was chosen as the primary interaction point and provided the direction used for Doppler correction [46].
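The event building of Sec. III A can be illustrated by a simple time-window match between proton and γ-ray time stamps. The sketch below assumes 25-ns clock ticks (40 MHz) and toy time stamps; the real analysis of course operates on the full module data streams:

```python
import numpy as np

TICK_NS = 25.0      # 40-MHz internal clock
WINDOW_NS = 1000.0  # 1-us coincidence window

def build_coincidences(proton_ts, gamma_ts):
    """Return (proton index, gamma index) pairs within the coincidence window.

    Time stamps are given in clock ticks; proton indices refer to the
    time-sorted proton list.
    """
    proton_ts = np.sort(np.asarray(proton_ts))
    pairs = []
    for j, tg in enumerate(gamma_ts):
        i = np.searchsorted(proton_ts, tg)
        for k in (i - 1, i):  # only the nearest protons can be in the window
            if 0 <= k < len(proton_ts) and \
               abs(proton_ts[k] - tg) * TICK_NS <= WINDOW_NS:
                pairs.append((k, j))
    return pairs

# toy example in ticks: the first two gammas are prompt, the third is not
print(build_coincidences([1000, 5000], [1010, 1039, 4000]))
```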
TABLE I. Overview of the efficiency of the delayed-coincidence setup for the two delayed transitions of interest, with energies of 313 and 694 keV. The first row includes the absolute photopeak efficiency of the germanium detector obtained from source data. Furthermore, the global efficiency, determined using three prompt T-REX-Miniball gates, is given. The weighted averages of these different gates (line 4) are used in the analysis of the data. Finally, the transmission efficiency from the comparison between the absolute photopeak efficiency and the global efficiency is shown. This transmission efficiency from the target position to the delayed-coincidence detection setup also incorporates the fact that the spread of ions on the stopper foil is not a point source. In the case of γ rays detected in the delayed-coincidence setup, 120-μs-wide coincidence windows were applied. Delayed-coincident γ rays could in principle be assigned to several light charged particles (p, d, t, α, and 12C) detected by T-REX within the 120-μs time window. However, the data showed that after kinematical identification (see next paragraph) 95% of the delayed-coincident γ rays were uniquely assigned to a single proton. Particle identification was performed based on the ΔE-E signature of particles detected in the forward direction (θLAB < 90°). Figure 4 illustrates the separation between identified deuterons and protons in one strip of the forward ΔE-E telescope. In the backward direction all protons were stopped in the ΔE detector, and hence the E_rest detector served as a veto to filter out electrons. Note that no elastically scattered particles are emitted in the backward direction. Energy corrections were applied to the detected particles for energy losses in the Mylar foil (forward direction only) and the target (all directions). These corrections were obtained by calculating the range of the detected particles in, e.g., the Mylar foil based on the detected energy, adding the effective thickness of the foil to this calculated range, and calculating the energy needed to obtain this combined range. Finally, based on the proton kinematics (energy and position of the detected proton), the corresponding excitation energy of 67Ni was calculated using the missing-mass method. B. 67Ni level scheme To construct the level scheme, information from (proton-)γγ coincidences, (proton-)DeCo-γ coincidences (see Fig. 2), and the coincident initial excitation energy (from the missing-mass method) was combined. An instructive figure combining Doppler-corrected γ-ray energies in Miniball and initial excitation energies is shown in Fig. 5, which can be used as a first guide to construct the level scheme and determine the (order of the) decaying γ transitions. Events situated on the solid line correspond to transfer reactions that populate a specific excited state which subsequently decays by the emission of one γ ray directly to the ground state. Already from this figure one can clearly identify substantial feeding of excited states at 1724 and 3621 keV, followed by direct decay to the 67Ni ground state. The most detailed information can be obtained from the combination of proton-γγ coincidences and the corresponding incoming excitation energies.
An example is given in Fig. 6(a), where proton-γγ coincidences are shown with a gate on the 1724-keV transition. Two strong transitions are clearly visible. The order of the 483-, 1724-, and 1896-keV γ rays can be determined by plotting the incoming excitation energy of 67Ni, deduced from the missing-mass method, for each of these transitions. The spectrum for 1724 keV shows multiple peaks, with the one at lowest energy around its transition energy of 1724 keV. The other gates have their first peak at higher energies, revealing that 1724 keV is a ground-state transition. The two other transitions are placed directly on top of the 1724-keV transition, as the position of the first peak in their excitation-energy spectra matches the sum of 1724 keV and the γ-ray gate energies, defining two states at 2207 and 3621 keV. Repeating this analysis for all possible γ gates allowed the level and decay scheme of 67Ni shown in Fig. 7 to be created. As a consistency check, a comparison was made between the experimental excitation spectrum (or feeding probability) deduced from all detected protons in singles and a reconstruction based on the proposed level scheme (Fig. 7) and the measured γ-ray intensities. This comparison is presented in Fig. 8. The normalization of both feeding probabilities was based on the integrals of both curves up to an energy of 5400 keV, to exclude the influence of the elastic proton peak at 6.4 MeV. In the reconstructed curve the ground-state feeding was left as a free variable, and a 4(1)% contribution was found through an iterative procedure. For each state the data from the γ intensity were folded with a Gaussian distribution with a FWHM of 800 keV (obtained from the experimental data). The good overall agreement between the excitation spectrum obtained from proton energies alone and the reconstructed curve based on γ intensities supports the proposed level scheme and the procedure of relying on proton-γ coincidences to extract angular distributions. A final note should be made on the region above 4-MeV excitation energy. When searching for γ rays originating from this excitation-energy region in 67Ni, some direct ground-state transitions can be seen, as well as most of the γ rays found at low excitation energy in the level scheme (e.g., 694, 1201, and 1724 keV), but in Fig. 5 transitions connecting these highly excited states with those at lower excitation energy are not observed. This nonobservation might result from the higher level density at high excitation energy and the large variety of possible decay paths; (d,p) experiments on lighter nickel isotopes at comparable CM energies have shown that at high excitation energy a large number of states are populated with somewhat small cross sections, supporting this statement [53][54][55][56][57][58][59][60]. The reconstructed curve in Fig. 8 for excitation energies higher than 4 MeV was corrected for this missed top-feeding by comparing the intensities of the γ rays placed in the low-energy part of the level scheme with the direct ground-state decay. From this analysis the total amount of missed γ-ray intensity was found to be 50% of the total intensity. In Fig. 5 a strong signal above 6-MeV excitation energy can be seen, consisting mostly of random coincidences with low-energy γ rays and the 1039-keV transition (66Cu β decay). This 6.4-MeV excitation-energy signature corresponds to elastically scattered protons (impurities in the target) in random coincidence with background radiation. This strong signature is also visible in Fig. 8.
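For reference, the missing-mass reconstruction used in Secs. III A and III B can be written compactly from four-momentum conservation. The sketch below approximates the masses from tabulated mass excesses (electron masses neglected), so the constants are illustrative rather than analysis-grade:

```python
import numpy as np

U = 931.494  # MeV per u
# nuclear masses approximated as A*u + mass excess (MeV), electrons neglected
M_BEAM = 66 * U - 66.006   # 66Ni
M_D    = 2 * U + 13.136    # deuteron
M_P    = 1 * U + 7.289     # proton
M_67NI = 67 * U - 63.743   # 67Ni ground state

def excitation_energy(t_beam_per_u, t_p, theta_lab):
    """Excitation energy of 67Ni (MeV) from the detected proton.

    t_beam_per_u: beam kinetic energy per nucleon (MeV/u);
    t_p, theta_lab: proton kinetic energy (MeV) and lab angle (rad).
    """
    t_beam = 66 * t_beam_per_u
    e_beam = M_BEAM + t_beam
    p_beam = np.sqrt(t_beam * (t_beam + 2 * M_BEAM))
    e_p = M_P + t_p
    p_p = np.sqrt(t_p * (t_p + 2 * M_P))
    e_miss = e_beam + M_D - e_p                       # target deuteron at rest
    p_miss_sq = p_beam**2 + p_p**2 - 2 * p_beam * p_p * np.cos(theta_lab)
    return np.sqrt(e_miss**2 - p_miss_sq) - M_67NI    # missing mass - g.s. mass

# Example: a proton detected at 140 degrees with 2.0 MeV kinetic energy
print(excitation_energy(2.95, 2.0, np.deg2rad(140.0)))
```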
C. Normalization To normalize the measured angular distributions and obtain absolute cross sections, the beam intensity must be known. Here, elastically scattered deuterons were used to determine the beam intensity by scaling the differential elastic cross section to the experimental data as N = I · t · dσ_d · ρd_A N_A · P_d · ε_D, with I the average beam intensity, t the measuring time, dσ_d the differential cross section, ρd_A N_A the number of target nuclei per unit surface, P_d the target purity, and ε_D the efficiency for detecting deuterons, including losses in the particle identification. This last, angle-dependent parameter is obtained from GEANT4 simulations [45,61]. All these quantities except the average beam intensity are known. As the detection range for deuterons was limited from 35° to 50°, it was not possible to fit the optical potentials to the available data, and hence global optical model potentials (GOMPs) have been used. Figure 9 shows the comparison of three differential cross sections calculated with the program FRESCO [62] using different GOMPs available from the literature [63][64][65], with the GOMP from Ref. [65] giving the best agreement because of the larger Coulomb radius. The most important optical model potential parameters used are summarized in Table II. A total average beam intensity of 4.1(3) × 10⁶ pps was found using this analysis. By normalizing the transfer data to the elastic scattering of deuterons, uncertainties in the physical properties of the target can be neglected, as both data sets are obtained under the same conditions and hence do not depend on the properties of the target. D. DWBA analysis The theoretical transfer-reaction angular distributions were calculated using the DWBA code FRESCO [62]. For the incoming channel, potentials from Ref. [65] were used. As the range of identified elastically scattered protons is insufficient to fit the optical-model potentials to the data, four sets of GOMPs available from the literature can be used to describe the outgoing channel [66][67][68][69]. The main difference between these sets is that the former two GOMPs include a real volume part, while the latter two do not. In this analysis the GOMPs from Ref. [66] were used; however, the shape of the angular distributions does not vary significantly between the different sets of potentials, while variations in the magnitude of the differential cross section are limited to 10%. An overview of the optical model potential parameters used can be found in Table II. To calculate the wave functions of the neutron bound in 67Ni, a Woods-Saxon potential was used with standard radius and diffuseness parameters of r = 1.25 fm and a = 0.65 fm. The depth of this potential was rescaled to reproduce the correct neutron binding energy. The low CM energy of the reaction (5.67 MeV) justifies the use of DWBA over ADWA, as the influence of deuteron breakup is negligible at this CM energy [70]. The influence of nonlocality in the reaction, as discussed in Ref. [71], was assessed, and a limited influence on the calculated differential cross sections was found. The variations in the extracted relative spectroscopic factors from this nonlocality were found to be of the order of 10% at most and did not change the results within the quoted error bars.
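The normalization of Sec. III C amounts to solving the yield relation for the beam intensity I. A minimal sketch with placeholder numbers (the per-bin yield, cross section, and efficiency below are hypothetical, not the measured values):

```python
N_A = 6.022e23  # Avogadro's number

def beam_intensity(n_det, t_s, sigma_bin_cm2, areal_density_g_cm2,
                   molar_mass_g, purity, eps_d):
    """Average beam intensity (pps) from the elastic deuteron yield N."""
    n_target = areal_density_g_cm2 * N_A / molar_mass_g  # nuclei per cm^2
    return n_det / (t_s * sigma_bin_cm2 * n_target * purity * eps_d)

# Hypothetical example: 1.1e6 deuterons in one angular bin over 10 days,
# 150 mb effective cross section in the bin, 100 ug/cm^2 CD2 target (two
# deuterons per CD2 unit folded into an effective molar mass of 8 g/mol),
# 88% target purity, 30% deuteron detection efficiency.
print(beam_intensity(1.1e6, 10 * 86400, 150e-27, 100e-6, 8.0, 0.88, 0.30))
# prints ~4e6 pps, the order of the average intensity quoted above
```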
As the absolute scaling factors between the calculated cross sections and the experimental data at energies near the Coulomb barrier depend both on the optical model potentials and on the geometry of the single-particle binding potentials, absolute spectroscopic factors (C²S) cannot be quoted reliably [72]. Therefore, only relative spectroscopic factors (with respect to the 1007-keV isomer originating from the νg9/2 orbital) and asymptotic normalization coefficients (ANCs) are reported here. The choice of the 1007-keV state was based on the available experimental data discussed in Sec. I, which indicate the high spectroscopic purity of this state. Calculations for all populated states were performed assuming pure configurations (spectroscopic factor = 1) with angular momenta of s1/2, p1/2,3/2, d5/2, f5/2, and g9/2. From the experimental data, angular distributions were obtained by requiring double gates on excitation energy (proton kinematics) and coincident γ-ray energy, similar to the analysis in Ref. [51]. The width of the excitation-energy window was set to 600 keV to reduce possible distortion from γ feeding from higher-lying levels. With this width, only 70% of all events were included, because the FWHM of these peaks in the excitation-energy spectra is about 800 keV. In the case of a small separation between excited states connected by an intense γ-ray transition (e.g., 1724 and 2207 keV, connected by the 483-keV transition), the contribution of the 2207-keV state was explicitly subtracted by combining spectra from different gates. A complete list of all gates used can be found in Table III. Angular distributions were obtained in the laboratory frame of reference in 5° bins, all individually efficiency corrected, with coefficients obtained from GEANT4 simulations [45,61]. Depending on the γ-decay pattern, multiple γ gates could be used to obtain an angular distribution for a specific state. In this case, the angular distributions were created for all possible gates, including individual corrections for γ-detection efficiency, before the global angular distribution was formed from the weighted average. If applicable, delayed-coincidence data were included for states decaying via the isomeric state at 1007 keV. Only for the ground state was a single gate on excitation energy used, because of the lack of (delayed) coincident γ rays. IV. RESULTS The extracted angular distributions and their comparison with DWBA calculations can be found in Fig. 10, and an overview of the extracted spectroscopic information is given in Table III, along with information from previous experiments [26,[29][30][31][32][33],35]. States compatible with an ℓ transfer between ℓ = 0 and ℓ = 4 have been observed. Angular distributions could only be extracted for states up to 3621 keV. Above this excitation energy the kinetic energy of the emitted protons in the backward direction (small CM angles) becomes low, and the proton energies drop below the detection threshold. In the forward direction the usable range becomes more confined, as these protons have insufficient energy to leave a ΔE-E signature.
Hence only large CM angles can be used for these states, making ℓ assignments cumbersome. The combination of angular momentum fits and information from γ-branching ratios allowed the spins and parities of seven excited states to be fixed. Based on the allowed β decay to the 3/2- ground state of 67Cu (log ft ∼ 4.7), a tentative spin assignment of (1/2-) was made for the 67Ni ground state [73]. This assignment is further supported by the quasielastic reaction work of Ref. [31], where the observed angular distribution fits a (1/2-) spin. A (1/2-) spin is also compatible with shell-model predictions. Based on the work of Zhu et al., the spin of the ground state can be firmly fixed to 1/2- [33]. The measured proton angular distribution in the current (d,p) experiment shows a peak near 20° in the CM frame of reference, in good agreement with an ℓ = 1 transfer, supporting the 1/2- assignment. However, no distinction can be made between 1/2- and 3/2- based on the transfer-reaction data alone. Assuming a νp1/2 configuration for the ground state, the relative spectroscopic factor is compatible with 1 (0.5 in the case of νp3/2), indicating a significant single-particle contribution to the wave function. This was already suggested by the measurement of the magnetic moment of the ground state, where a value deviating by only 6% from that of a pure configuration was found [34]. For the proton angular distribution of the ground state, only a single gate on excitation energy was required. Because of the limited energy resolution, the proton angular distribution can be distorted by both the 694- and the 1007-keV states, leading to an overestimation of the differential cross section and hence also of the relative spectroscopic factor. The first excited state at 694 keV is weakly populated, and the observed proton angular distribution fits ℓ = 1, 2, and 3. The allowed β decay from the (7/2-) ground state of 67Co provided a log-ft value compatible with the νf5/2^-1 configuration proposed in Ref. [29]. Furthermore, several arguments support a spin and parity assignment of 5/2- for this state: recent deep-inelastic scattering work has shown that the 694- to 313-keV sequence has a stretched quadrupole character [33], which, combined with the measured lifetimes, fixes the spin of the 694-keV state to be 5/2-. As a final remark it should be noted that the small relative spectroscopic factor assuming a νf5/2 configuration for this state is indeed expected, as the νf5/2 orbital is presumed to be almost fully occupied in 66Ni. The excited state at 1007 keV agrees with an ℓ = 3 (χ²red = 2.4) or ℓ = 4 (χ²red = 1.8) transfer. This state had previously been assigned (9/2+) based on its isomeric features and similarities with 65Fe [26]. The isomeric features, the stretched quadrupole character of the 313- to 694-keV sequence, and the absence of a ground-state transition favor an ℓ = 4 description, resulting in a delayed 313-keV M2 transition and a spin and parity assignment of 9/2+. As this isomeric state decays via a cascade of two delayed γ rays, the only way to obtain a proton angular distribution was to use the delayed-coincidence technique. In the work of Ref.
[31] excited states with energies in the vicinity of the 694- and 1007-keV states reported here were observed. However, the spin assignments for these states are reversed, and for the 1007-keV isomer a spin of 3/2- was proposed. When the ejectile angular distribution for the 1140-keV state in Ref. [31] is compared with the calculation for a 9/2+ spin of the 770-keV state, the agreement is very reasonable, taking into account that all quoted excitation energies in Ref. [31] have an offset compared with the values reported here. In the case of the 1724-keV level, the observed angular distribution of the transfer protons is in good agreement with an ℓ = 1 transfer. A spin and parity assignment of 3/2- is preferred for two reasons. First, strong top feeding from the 5/2+ level at 2207 keV (Eγ = 483 keV; see below) is observed, where the inclusion of an E1 component is necessary, as a pure M2 transition in the case of spin 1/2- would be too slow to explain this strong γ branch. Second, there is the small γ-ray branch to the 5/2- state at 694 keV, with an observed ratio of branching ratios I(1030)/I(1724) = 0.05. Using Weisskopf estimates, the theoretical branching ratios would be 8 × 10⁻⁵ for 1/2- (E2 and M1 transitions, respectively) and 0.2 for 3/2- (twice M1) spin and parity of the 1724-keV state. This argument supports the 3/2- spin assignment for this excited state. The proton angular distribution of the excited state at 2207 keV fits well with both ℓ = 1 and ℓ = 2. Because of the strong γ-decay link with the 9/2+ state at 1007 keV, an ℓ = 2 interpretation is favored. A spin and parity of 5/2+ is strongly supported by the observed γ-branching ratios to the 9/2+ state at 1007 keV, the 5/2- state at 694 keV, and the 3/2- state at 1724 keV, and by the absence of a direct connection with the 1/2- ground state, which rules out a 3/2+ assignment. The observed branching ratios towards the 3/2- and 5/2- states were used to estimate the B(E1) transition rates to these negative-parity states, assuming a single-particle d5/2-g9/2 E2 transition. The obtained B(E1) estimates are of the order of 10⁻⁴-10⁻⁶, which are typical values for this mass region [74]. The 3277-keV state has the same characteristics as the 2207-keV state. The angular distribution of the transfer protons is best described by ℓ = 1 or 2, and this state lacks a direct ground-state transition. Because of this, and in combination with a strong link with the 9/2+ state at 1007 keV, a 5/2+ spin assignment is adopted. Again, the deduced B(E1) values for transitions to negative-parity states are of the order of 10⁻⁵, resulting in a similar interpretation as for the 2207-keV state.
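The Weisskopf argument used for the 1724-keV state can be checked with the standard single-particle estimates (widths in eV for Eγ in MeV); the numerical prefactors below are the usual compilation values, and the result is an order-of-magnitude statement only:

```python
def weisskopf_width(multipole, e_mev, a=67):
    """Weisskopf single-particle width estimate in eV (E_gamma in MeV)."""
    est = {
        "E1": 6.75e-2 * a ** (2 / 3) * e_mev**3,
        "M1": 2.07e-2 * e_mev**3,
        "E2": 4.91e-8 * a ** (4 / 3) * e_mev**5,
        "M2": 1.51e-8 * a ** (2 / 3) * e_mev**5,
    }
    return est[multipole]

# 1724-keV state: branch to the 694-keV 5/2- state (1030 keV) vs the 1/2-
# ground state (1724 keV), for the two candidate spin assignments
ratio_half = weisskopf_width("E2", 1.030) / weisskopf_width("M1", 1.724)   # 1/2-
ratio_three_half = weisskopf_width("M1", 1.030) / weisskopf_width("M1", 1.724)  # 3/2-
print(f"1/2-: {ratio_half:.1e}   3/2-: {ratio_three_half:.2f}")
# ~1e-4 vs ~0.2: only 3/2- is compatible with the observed ratio of 0.05
```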
In contrast with (d,p) experiments on lighter nickel isotopes, no states with ℓ = 0 character were unambiguously observed here, as their differential cross sections strongly peak at small CM angles, where no experimental data are available. The angular distribution of the state at 3621 keV is best fitted with ℓ = 1 or 2, but ℓ = 0 cannot be totally discarded. As this state is only bound by 2.2 MeV, the calculated angular distributions do not show a strong ℓ dependence. Another feature is the peculiar γ decay of this state, with a strong branch to the 1/2- ground state and a weaker branch (7%) to the 3/2- state at 1724 keV, favoring a low spin assignment. The expected branching to the 1724-keV 3/2- state for single-particle M1 or E1 transitions is 14%, in line with the observed value. If the spin were 3/2 or higher, one would expect to observe γ transitions towards 3/2 and 5/2 states at lower excitation energy. These have not been observed, and thus the characteristic γ-decay path, together with the information from the angular distribution, limits the spin of this state to 1/2. In comparison with the lighter nickel isotopes, negative-parity states are only observed at low excitation energy, while the excitation spectrum at higher energy is dominated by ℓ = 0, 2, and 4 transfers. The characteristic γ decay of an identified 1/2+ state in 61Ni proceeds exclusively towards 1/2- and 3/2- states [75]. In the 70Zn(4He,7Be)67Ni work of Ref. [31] a state at 3.680 MeV was identified with a proposed spin and parity of (3/2-). However, the authors note that the angular distribution is not very characteristic. Additionally, a difference between the excitation energy quoted in Ref. [31] and that of this work can be noted. In the present experiment γ-ray energies were used to determine the excitation energies, which is more accurate than excitation energies from scattered particles. Finally, the relative spectroscopic factor for a νp1/2 interpretation of the 3621-keV state would be close to 2, which is unphysically large. The combination of all these arguments favors a (1/2+) assignment for the 3621-keV state. For the other states included in the level scheme, no angular distributions could be extracted, either because of limited statistics or because of the confined angular range (mainly for excited states above 3.6 MeV). From a comparison with both lighter nickel isotopes [53][54][55][56][57][58][59][60] and odd-Z, even-N nuclei near the semimirror nucleus 90Zr (Z = 40, N = 50), where protons occupy the same orbitals as the neutrons do around 68Ni, it is expected that nearly the full pf strength is exhausted in the observed states at low excitation energy [76][77][78]. In 62Ni(d,p)63Ni and 64Ni(d,p)65Ni reactions [57][58][59], ℓ values above 3 MeV are all identified as ℓ = 0 or 2, with limited contributions of ℓ = 4. At these excitation energies, the integrated calculated cross sections within the detectable range for s1/2 (ℓ = 0) and d5/2 (ℓ = 2) are nearly identical, and hence their relative spectroscopic factors are similar as well. As no conclusion can be drawn on their spin, these states are indicated as (ℓ = 0,2) in Table III, as they are expected to exhibit νs1/2 or νd5/2 single-particle strength from systematics. A. Distribution of the single-particle strength The overview of extracted relative spectroscopic factors (relative to the 1007-keV 9/2+ isomer) can be found in Table III and is visualized in Fig.
13(a). The relative spectroscopic factor of the 1/2- ground state, which is compatible with 1, indicates that the νp1/2 and νg9/2 orbitals in 66Ni are nearly equally empty, and is consistent with the measured magnetic moment of the ground state, which is close to the Schmidt value [34]. Furthermore, the experimental relative spectroscopic factors of the 5/2- state at 694 keV and the 3/2- state at 1724 keV, with proposed νf5/2^-1 and νp3/2^-1 configurations, respectively, are also small, but not zero. This hints at a limited amount of neutron pair scattering across the N = 40 shell gap. A recent application of the sum rule [81], combining (d,p) and (p,d) reaction data on the stable even-A nickel isotopes, allowed the neutron orbital occupancies in these isotopes to be extracted [79,80]. In the case of 64Ni, the heaviest stable isotope, a 54% occupation of the νp1/2 orbital is already observed, along with a limited occupation of 6.6% of the νg9/2 orbital. In Fig. 11, the sums of the spectroscopic factors of the neutron-addition reaction (d,p) on 62-64Ni for the νp1/2, p3/2, and f5/2 orbitals relative to g9/2 are shown, as deduced from Refs. [79,80]. The downward trend observed for the p3/2 and f5/2 orbitals when moving towards heavier nickel isotopes continues in 67Ni and indicates the steady filling of these orbitals in the even-even nickel isotopes when moving from the N = 28 towards the N = 40 shell closure. However, for the p1/2 orbital, from 63Ni onwards, a deviation from the general downward trend is observed. This indicates an increasing neutron occupancy of the g9/2 orbital, possibly combined with a similar occupancy of the p1/2 orbital in 64Ni and 66Ni (see Fig. 3 of Ref. [79]). The two ℓ = 2 states at 2207 and 3277 keV have both been assigned a 5/2+ spin and parity. From Table III it is clear that both states contain a nearly equal amount of νd5/2 single-particle strength, accounting in total for 54(11)% of the observed νg9/2 single-particle strength. Such a twofold splitting is also observed in (d,p) reactions on 58,60,62,64Ni from Refs. [53][54][55][56][57][58][59][60], with total strengths of 31%, 27%, 23%, and 34%, respectively. In those experiments a larger number of 5/2+ states was observed, but in all cases a considerable part of the νd5/2 single-particle strength is concentrated in two low-lying 5/2+ states. Because of the rise of the Fermi surface, the energies of the positive-parity 5/2+ and 9/2+ states relative to the negative-parity ground state in the 59-67Ni isotopes decrease steadily. Possible configurations that can form 5/2+ states at low excitation energy are a pure νd5/2 excitation, a νg9/2 neutron coupled to a 66Ni core excitation (νg9/2^1 ⊗ 2+core), a νg9/2^3 configuration, and a pf neutron coupled to a core octupole excitation (νpf ⊗ 3-core). The one-neutron transfer reaction is mainly sensitive to the first type of configuration, as the latter two include multiparticle rearrangements. In one-neutron transfer-reaction studies on lighter stable nickel isotopes using similar beam energies, the population of states with higher seniority was observed [58]. The cross section for this kind of higher-order transfer reaction is considerably weaker than for direct reactions, in most cases by orders of magnitude (see, e.g., Table 3 in Ref. [58]). Therefore it is assumed that multiparticle rearrangements do not disturb the expected and measured cross sections in the data presented here. Furthermore, in the work of Ref.
[33] a collection of high-spin positive-parity states interpreted as νg9/2^3 configurations was observed. None of these states are observed in the present (d,p) data, supporting the identification of νd5/2 single-particle strength in these 5/2+ states. Within the present work, two 5/2+ states with an equal and considerable amount of νd5/2 single-particle strength were observed. This might hint that substantial mixing occurs between the pure νd5/2 single-particle configuration and core-coupled collective modes such as those discussed earlier. The amount of mixing and the distribution of the different configurations over the resulting states depend on the initial energies of the unperturbed configurations. The energy of the νg9/2 ⊗ 2+core configuration should lie roughly 1.4 MeV above the 1007-keV 9/2+ state, while the calculations in Ref. [24] estimate the νg9/2-d5/2 gap to be around 1.5-2 MeV. Hence a sizable amount of configuration mixing can indeed be expected. The C²S-weighted energy centroids from the neutron-addition reactions for the observed states, relative to the g9/2 orbital, are shown in Fig. 12, using the following expression: E_centroid = Σi (C²S)i Ei / Σi (C²S)i. To estimate the N = 50 gap size in the nickel isotopes, the data from Refs. [53][54][55][56][57][58][59][60] were combined with the present set. The more recent data on the stable nickel isotopes [79,80] only include the strength of the νg9/2 configuration and the negative-parity states. Moreover, two assumptions were made: (1) all ℓ = 2 states observed in 59-65Ni are 5/2+ states; (2) in the case of 67Ni all observed states above 3 MeV are of ℓ = 2 character. It should be stressed here that only 50% of the available νd5/2 single-particle strength is unambiguously identified, assuming the full νg9/2 single-particle strength is exhausted in the 1007-keV state. The value of the N = 50 gap derived from these experimental data is 2.6 MeV, which is in agreement with the value used in the Hamiltonian of Refs. [19,82]; however, the calculated d5/2 single-particle The 3621-keV state, with a proposed spin and parity of (1/2+) and a relative spectroscopic factor of 1.1, shows a different distribution of the νs1/2 single-particle strength in 67Ni in comparison with the lighter nickel isotopes. In 62,64Ni(d,p)63,65Ni experiments the largest relative spectroscopic factor observed for a 1/2+ state is 0.4, and in general the νs1/2 single-particle strength is mostly fragmented [57][58][59]. B. Comparison with the 90Zr region The region around 90Zr (Z = 40, N = 50) is often compared with the 68Ni region (Z = 28, N = 40), as the protons in the former are expected to occupy the same shell-model orbitals as the neutrons in the latter. The level structures in 68Ni and 90Zr indeed look similar. One-proton transfer data have been obtained from the 88Sr(3He,d)89Y reaction [76], and a direct comparison with our data is made [Fig. 13(c)]. Here the uncharacterized states above 3 MeV in 67Ni are labeled as both ℓ = 0 and 2. A good agreement for the negative-parity pf states below 2 MeV and the 9/2+ state can be seen, except for the position of the 5/2- state. This shift towards higher energy in 89Y can be attributed to the attractive πf5/2-νg9/2 tensor interaction, not present in 67Ni, binding the πf5/2 orbital more tightly. The ground-state relative spectroscopic factors are also similar. Data from the 88Sr(d,3He)87Rb reaction have shown indications of a one-proton occupancy of the πg9/2 orbital in the ground state of 88Sr [83].
However, a major difference in the structure of the positive-parity sd states is visible, as the ℓ = 0 and 2 strength is more fragmented and resides at higher energy in 89Y, while the νd5/2 and νs1/2 strength is concentrated and shifted to lower excitation energies in 67Ni. In 89Y, besides these 5/2+ states carrying single-particle strength, a low-lying 5/2+ state at 2222 keV was observed, which is, however, only very weakly populated in the (3He,d) reaction [76]. Moreover, this state can be reproduced by shell-model calculations omitting the πd5/2 orbital from the valence space, supporting a πg9/2 ⊗ 2+core interpretation [84]. This distinction in the structure of the positive-parity states in 89Y and 67Ni indicates a weaker N = 50 gap near 68Ni compared with the Z = 50 gap near 90Zr. The fact that in the 66Ni(d,p)67Ni experiment sizable νd5/2 single-particle strength is found at low excitation energy (relative to the 9/2+1 state) underlines the difference between these two regions and supports the importance of the νd5/2 orbital for the nuclear structure in the vicinity of 68Ni. From the distribution of the ℓ = 2 single-particle strength in 89Y, the Z = 50 gap size is estimated to be 3.9 MeV, which is indeed 1.3 MeV larger than the N = 50 shell gap near 68Ni. VI. CONCLUSIONS A one-neutron transfer reaction using a radioactive 66Ni beam accelerated to an energy of 2.95 MeV per nucleon was successfully performed at the REX-ISOLDE facility at CERN, using the T-REX and Miniball arrays in combination with a delayed-coincidence setup used to perform spectroscopy of the 1007-keV, 13.3-μs isomer. Excited states up to 5.8 MeV were populated in the reaction, and the level scheme of 67Ni was extended up to this excitation energy. A DWBA analysis was performed to characterize the measured differential cross sections of the populated excited states. Negative-parity pf states were observed at energies of 0, 694, and 1724 keV. Furthermore, the νg9/2 character of the 1007-keV isomer was confirmed, and two 5/2+ states on top of this isomer were identified. The trend of the spectroscopic factors and C²S-weighted centroids relative to the νg9/2 orbital observed in previous work for 59-65Ni is continued in 67Ni. The measured relative spectroscopic factors for the 5/2+ states show that half of the νd5/2 single-particle strength is split in nearly equal parts over these two 5/2+ states, hinting at substantial mixing of the νd5/2 configuration with collective core-coupled modes. The estimated size of the N = 50 gap in 67Ni was found to be 2.6 MeV, which shows no deviation from the gap size determined in the lighter 59-65Ni isotopes.

FIG. 1. Proton-γ time difference between γ rays detected in Miniball and protons detected by T-REX. The gray region defines the prompt proton-γ time window; other events are referred to as random coincidences. The width of the prompt window is determined by the timing resolution of the low-energy γ rays after the walk correction.

FIG. 2. (a) Doppler-corrected γ-ray spectrum in Miniball, delayed coincident with either 313 keV or 694 keV. Random-delayed events (see, e.g., Fig. 1 in Ref. [38]) and the delayed-coincident background have been subtracted. (b) Delayed-coincidence spectrum requiring a prompt proton-1201-keV event in Miniball. This spectrum was used to determine the delayed-coincidence efficiency. See text for more information.

FIG. 3. (Color online) Doppler-corrected Miniball γ-ray spectra, prompt proton coincident (black) and random proton coincident (red). See Fig. 1 for the definition of the Miniball timing windows. In the prompt spectrum most lines belonging to the γ decay of 67Ni can be clearly identified, while only traces of the most intense lines remain in the random spectrum, together with a broadened β-decay line around 1039 keV (66Cu → 66Zn). Energies of the most prominent γ rays are indicated in keV.

FIG. 4. (Color online) Measured ΔE-E signature in strip 7 (θLAB between 42° and 48°) of the barrel detector. Particles that are stopped in the ΔE part of T-REX are rejected and not shown in this figure. The red events correspond to particles identified as deuterons by the analysis software based on their kinematical signature. The black dots are identified as protons.

FIG. 5. (Color online) Doppler-corrected energy of γ rays with respect to the original excitation energy of 67Ni, deduced from proton kinematics. Events on the solid line correspond to direct ground-state γ transitions after the transfer reaction.

FIG. 6. (a) Proton-γ-γ coincidences for the 1724-keV transition. The strongest coincidences at 483 and 1896 keV are clearly visible. (b) Corresponding incoming excitation energies in 67Ni deduced from coincident proton kinematics for 1724 keV and coincident γ rays, efficiency corrected. See text for more information.

FIG. 7. Level scheme constructed from the available (d,p) data. Gamma and level energies are given in keV. Gamma-ray intensities are given relative to the 3621-keV transition.

FIG. 10. Experimental angular distributions (in mb/sr) for different states in 67Ni. The two or three best ℓ fits are presented for each case (solid line for the best fit, dashed line for the second best, and dotted-dashed for the third best).

FIG. 13. Distribution of neutron single-particle strength in 67Ni (experimental and shell-model calculations) and proton single-particle strength in 89Y. See text for additional information.

TABLE II. Overview of the optical model parameters used in the DWBA analysis, taken from the GOMPs of Refs. [65] (incoming channel) and [66] (outgoing channel).

TABLE III. Available spectroscopic information on the observed excited states in 67Ni. The second column shows the γ rays used as gates to obtain the angular distribution. In case no unambiguous assignment could be made based on the measured differential cross section, all possible ℓ values have been included in column 4. The underlined value is the adopted one, based on additional spectroscopic information. If available, information from Refs. [29][30][31] was included for the observed levels.
Concrete-Filled-Large Deformable FRP Tubular Columns under Axial Compressive Loading The behavior of concrete-filled fiber-reinforced polymer (FRP) tube (CFFT) columns under axial compressive loading was investigated. Unlike traditional FRPs such as carbon, glass, and aramid, the FRP tubes in this study were designed using large-rupture-strain FRPs, which are made of recycled materials such as plastic bottles; hence, large rupture strain (LRS) FRP composites are environmentally friendly and can be used in the context of green construction. This study performed finite element (FE) analysis using LS-DYNA software to conduct an extensive parametric study on CFFTs. The effects of the FRP confinement ratio, the unconfined concrete compressive strength (f′c), the column size, and the column aspect ratio on the behavior of the CFFT under axial compressive loading were investigated. A comparison between the behavior of CFFTs with LRS-FRP and those with traditional FRP (carbon and glass) over a wide range of confinement ratios was conducted as well. A new hybrid FRP system combining traditional and LRS-FRP is proposed. Generally, the CFFTs with LRS-FRP showed remarkable behavior under axial loading in terms of both strength and ultimate strain. Equations to estimate the concrete dilation parameter and dilation angle of CFFTs with LRS-FRP tubes and hybrid FRP tubes are suggested. Introduction Green buildings are environmentally sound buildings. The ideal green project preserves and restores habitat that is vital for sustaining life by acting as a net producer and exporter of resources, materials, energy, and water rather than being a net consumer. The Environmental Protection Agency (EPA) suggests using recycled industrial goods, such as demolition debris, in construction projects for green buildings. Energy-efficient building materials and appliances are promoted in the United States through energy rebate programs. However, using green materials in construction is usually costly. Recently, new fiber-reinforced polymer (FRP) composites have been manufactured from recycled plastic bottles. They were introduced as alternatives to traditional FRPs such as glass, aramid, and carbon FRP. The new FRP composites are much cheaper than the traditional FRPs. These new FRP composites are made of polyethylene terephthalate (PET) and polyethylene naphthalate (PEN) fibers. The traditional FRP composites have linear elastic stress-strain relationships, with rupture strains ranging from about 1.0% to 2.5%. In contrast, the new FRP composites have bilinear stress-strain relationships characterized by an elastic modulus and a tangent modulus. This bilinear stress-strain relation is due to amorphous-phase motion and the sliding of macromolecular chains between the LRS fibers and the matrix [1]. However, the elastic modulus of the new FRP composites is, in general, lower than that of the traditional FRPs. They have much larger rupture strains, usually larger than 6.0%. Therefore, the new FRP composites were called large rupture strain FRPs (LRS-FRPs). PET polymers keep their mechanical strength up to a temperature of 150-175 °C [2].
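The bilinear stress-strain law described above can be sketched as a simple piecewise function. The moduli and the transition and rupture strains below are illustrative placeholders, not measured PET or PEN properties:

```python
def lrs_frp_stress(strain, e1=17000.0, e2=8000.0, eps_t=0.015, eps_rup=0.08):
    """Stress (MPa) for a given tensile strain of an LRS-FRP coupon.

    e1: elastic modulus (MPa); e2: tangent modulus (MPa);
    eps_t: transition strain between the two branches; eps_rup: rupture strain.
    """
    if strain > eps_rup:
        return 0.0                                 # fiber rupture
    if strain <= eps_t:
        return e1 * strain                         # initial elastic branch
    return e1 * eps_t + e2 * (strain - eps_t)      # tangent-modulus branch

for eps in (0.005, 0.03, 0.06):
    print(eps, lrs_frp_stress(eps))
```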
Use of FRP in new structures has grown rapidly in the past two decades. The main purpose of using FRP is to enhance the strength and ductility of a structural member. Concrete-filled FRP tubes (CFFTs) have many benefits, such as a high strength-to-weight ratio, high confinement, and corrosion resistance. The FRP tube acts as stay-in-place formwork, confines the concrete structural element, and increases its compressive strength. Several researchers have investigated the behavior of CFFT columns using traditional FRP tubes under different loadings [3-10]. Recently, some experimental work has been conducted to investigate the performance of LRS-FRPs in jacketing concrete members and to examine their behavior under different loadings such as axial, flexural, and shear loading [1,11-15]. This research has shown that LRS-FRP jacketed concrete members had superior behavior compared to members retrofitted using conventional FRP. However, no studies had been conducted to determine the benefits of combining both traditional and LRS-FRPs in a hybrid system.

The FRP confinement pressure (f_l) and the concrete dilation angle (ψ) are essential parameters in characterizing the performance of concrete under compressive stress in CFFTs. The confinement pressure is the lateral pressure from the FRP tube that confines the concrete core when the concrete material starts to expand. The confinement pressure and the confinement ratio can be calculated using Equations (1) and (2):

f_l = 2 E_f ε_fu t_f / D (1)

Confinement ratio = f_l / f'c (2)

where E_f is the elastic modulus of the FRP tube in the confinement direction, ε_fu is the ultimate tensile strain of the FRP in the confinement direction, t_f is the FRP tube thickness, D is the column's diameter, and f'c is the characteristic concrete cylindrical strength at 28 days. The dilation angle is defined as the inclination of the failure surface towards the hydrostatic axis; physically, it is interpreted as the concrete's internal friction. The dilation angle varies depending on the axial stress level and the FRP jacketing stiffness [16,17]. However, previous finite element studies let the dilation angle vary with the FRP jacketing stiffness while holding it constant across axial load levels [18-20], and their finite element results agreed with the experimental results with reasonable accuracy. For unconfined concrete, the dilation angle is usually taken between 36° and 40°, with an average value of 38° [21-23].

An extensive finite element (FE) study is presented to investigate the behavior of CFFTs using LRS-FRP under axial compressive loading. LS-DYNA software [24] was used for this study. A high range of confinement ratios was investigated for traditional FRP and LRS-FRP. New state-of-the-art CFFT columns using hybrid FRP tubes combining traditional FRP and LRS-FRP are introduced. In addition, the effects of the concrete strength (f'c), column size, and column aspect ratio on the behavior of the CFFT were studied. This study introduces recommendations for selecting the most effective FRP type for CFFT tubes. A new equation to estimate the dilation angle for CFFT columns with LRS-FRP tubes is suggested.
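Equations (1) and (2) are straightforward to sanity-check numerically. The Python sketch below computes the confinement pressure and confinement ratio for a hypothetical LRS-FRP tube; all input values are illustrative assumptions, not data from Table 1 or Table 2.

```python
def confinement_pressure(E_f, eps_fu, t_f, D):
    """Eq. (1): lateral confinement pressure f_l = 2 * E_f * eps_fu * t_f / D."""
    return 2.0 * E_f * eps_fu * t_f / D

def confinement_ratio(f_l, fc):
    """Eq. (2): confinement pressure normalized by the unconfined strength f'c."""
    return f_l / fc

# Illustrative (assumed) values for an LRS-FRP tube on a 150 mm cylinder.
E_f = 17_900.0    # hoop-direction elastic modulus [MPa] (assumed)
eps_fu = 0.085    # ultimate tensile strain [-] (assumed; LRS-FRP rupture strain > 6 %)
t_f = 1.0         # tube thickness [mm] (assumed)
D = 150.0         # column diameter [mm]
fc = 35.0         # unconfined cylinder strength [MPa] (assumed)

f_l = confinement_pressure(E_f, eps_fu, t_f, D)
print(f"f_l = {f_l:.1f} MPa, confinement ratio = {confinement_ratio(f_l, fc):.2f}")
```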
Finite Element Model Validation

FE modeling was used to analyze the behavior of CFFTs with LRS-FRP under axial loading. The LS-DYNA 971 R3 software was used to build the models and validate them against the experimental results gathered from 12 CFFT columns with LRS-FRP by Dai et al. [13]. Each column had a circular cross-section with an outer diameter of 150 mm and a height of 300 mm. These columns had concrete compressive strengths (f'c) between 32.5 and 39.2 MPa. Either PET-FRP or PEN-FRP was used to manufacture the FRP tubes (Table 1). These models were then used to conduct a parametric study investigating the differences between LRS-FRP, traditional FRP, and a hybrid system combining both, by analyzing the effects of the confinement ratio, column size, and column aspect ratio on the CFFT behavior under axial loading.

Geometry

The concrete cylinder and steel plates were modeled using solid elements (Figure 1). The outer FRP tube was simulated using shell elements. All solid elements were modeled with constant stress and one-point quadrature to reduce the computational time. Hourglass control was used to avoid spurious singular modes (i.e., hourglass modes) for the solid elements; the hourglass value for all models was taken as the default value of 0.10. Surface-to-surface contact elements were used to simulate the interface between the concrete cylinder and the outer FRP tube, and node-to-surface contact elements were used between the rigid plates and the cylinder. The coefficient of friction for all contact elements was taken as 0.6 [25].

Concrete Material Model

Different material models are available in LS-DYNA to simulate concrete. Because the Karagozian and Case Concrete Damage Model Release 3 (K&C model) exhibited good agreement with the experimental results collected in previous studies (e.g., [25]), it was chosen for this study. The model was developed based on the theory of plasticity and has three shear failure surfaces: yield, maximum, and residual [26]. This material model has eighty parameters that can be either user defined or automatically generated. This study used the automatic generation option for the failure surface, with f'c as the main input to the model. Another input to the model, the fractional dilation parameter (ω), accounts for volumetric change in the concrete. The fractional dilation parameter is related to the dilation angle by Equation (3):

ω = tan ψ (3)

Youssf et al. [20] suggested Equation (4) to calculate the dilation parameter for CFFTs with traditional FRP:

ω1 = −0.195 ln(λ1) … (4)

Youssf et al.'s equation was modified to propose a new Equation (5) for the dilation parameter of CFFTs with LRS-FRP, based on the validation against the experimental results. For a conventional concrete column without FRP confinement, the equation yields a constant dilation parameter of 0.8, which is approximately equal to tan 38°; this agrees with the common value of the dilation angle for concrete without FRP confinement. The dilation parameter for the hybrid system combining LRS-FRP and traditional FRP equals the sum of the two dilation parameters (Equation (6)):

ω_hybrid = ω1 + ω2 (6)

where λ1 is the confinement modulus ratio, E_f1 is the elastic modulus of the traditional FRP, E_f2 is the tangent modulus of the LRS-FRP, t_f is the thickness of the FRP, D is the column's diameter, and f'c is the characteristic cylindrical concrete strength at 28 days.
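Equation (3) and the hybrid summation in Equation (6) amount to a couple of one-line conversions, sketched below in Python; the logarithmic fits of Equations (4) and (5) are truncated in the extracted text, so they are not reproduced here.

```python
import math

def dilation_parameter(psi_deg):
    """Eq. (3): fractional dilation parameter, omega = tan(psi)."""
    return math.tan(math.radians(psi_deg))

def dilation_angle(omega):
    """Inverse of Eq. (3): dilation angle in degrees from omega."""
    return math.degrees(math.atan(omega))

def hybrid_dilation_parameter(omega_1, omega_2):
    """Eq. (6): the hybrid tube takes the sum of the two contributions."""
    return omega_1 + omega_2

# Sanity check against the unconfined case cited in the text:
# omega = 0.8 corresponds to a dilation angle of roughly 38 degrees.
print(dilation_parameter(38.0))  # ~0.781
print(dilation_angle(0.8))       # ~38.7
```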
FRP Material Model

The material properties of PET-FRP and PEN-FRP composites were studied by Dai et al. [13]. These types of FRP have approximately bilinear stress-strain relationships that can be described in terms of two moduli of elasticity: the initial elastic modulus (Ef1) and the tangent modulus (Ef2). The material properties of PET-FRP and PEN-FRP are summarized in Table 2, along with the material properties of the glass and carbon FRP taken from the manufacturer data sheets of Tyfo® SEH-51 and Tyfo® SCH-41. FRP composites were modeled as orthotropic materials: the "108-ortho_elastic_plastic" material was used for the LRS-FRP to simulate the bilinear behavior, and the "002-orthotropic-elastic" material model was used for the traditional FRP to simulate the linear behavior. The "108-ortho_elastic_plastic" material model combines orthotropic, elastic, and plastic behaviors for shells only. This material is defined by the engineering constants elastic modulus (Ef1), tangent modulus (Ef2), shear modulus (G), and Poisson's ratio (PR) in the two principal axes (a and b); additionally, the fiber orientation is defined by a vector. The "002-orthotropic-elastic" material model, by contrast, has no tangent modulus. The failure criterion for the FRP composites was defined using "000-add_erosion" by assigning the ultimate strain of the FRP in the "EFFEPS" card.

Loading and Boundary Conditions

Displacements and rotations in all directions were prevented at the bottom of the bottom plate. Displacements in the X and Y directions were prevented for all nodes of the top plate. Monotonic downward (negative Z direction) displacement loading was applied to the top plate for axial compressive loading until failure occurred. Failure was defined as rupture of the FRP or crushing of the concrete cylinder.

Validation Results

Figure 2 illustrates the axial strain-axial stress relationships for all of the cylinders from the FE and experimental results. The axial strain of each cylinder was obtained by dividing the axial displacement of the loading plate by the cylinder's height of 300 mm. The axial stress of each cylinder was obtained by dividing the axial reaction at the bottom of the bottom plate by the cross-sectional area of the cylinder. All simulated columns behaved in a manner similar to the tested cylinders until failure, and all cylinders failed by FRP rupture in both the experiments and the FE simulations (Figure 3). The FE's average errors in predicting the ultimate axial stress and ultimate axial strain were 9% and 10%, respectively, where the error was calculated as the absolute value of the difference between the experimental and FE results divided by the experimental result. The FE accurately predicted the initial stiffness and stiffness degradation of all cylinders until the axial stress reached the unconfined concrete cylindrical strength (f'c). Beyond this stress, the FE results deviated slightly from the experimental results until failure. This deviation arose because the dilation angle was taken as a constant value in the FE, whereas in reality it changes with the axial stress level. Nevertheless, the dilation assumption did not significantly affect the overall behavior, as the accuracy in predicting the ultimate strain was 91% and the accuracy in predicting the ultimate stress was 90%.
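The bilinear LRS-FRP law (initial modulus Ef1 up to a transition strain, tangent modulus Ef2 beyond it) and the validation error metric above are both simple to express in code. A minimal Python sketch follows; the transition strain and moduli are placeholder assumptions, not the values from Table 2.

```python
def lrs_frp_stress(eps, E_f1, E_f2, eps_t, eps_fu):
    """Bilinear LRS-FRP stress-strain law: initial modulus E_f1 up to the
    transition strain eps_t, tangent modulus E_f2 beyond it, rupture at
    the ultimate strain eps_fu. Stress is returned in the moduli's units."""
    if eps > eps_fu:
        raise ValueError("FRP ruptured: strain exceeds the ultimate strain")
    if eps <= eps_t:
        return E_f1 * eps
    return E_f1 * eps_t + E_f2 * (eps - eps_t)

def percent_error(experimental, fe):
    """Validation metric used above: |experimental - FE| / experimental."""
    return abs(experimental - fe) / experimental * 100.0

# Placeholder (assumed) PET-like properties, moduli in MPa.
print(lrs_frp_stress(0.05, E_f1=17_900.0, E_f2=8_000.0, eps_t=0.02, eps_fu=0.085))
print(percent_error(experimental=62.0, fe=56.4))  # ~9 % error example
```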
Parametric Study

The LRS-FRP is a new composite that has only recently been investigated. Once the finite element model was validated, a comprehensive parametric study was conducted to numerically investigate the behavior of CFFTs with LRS-FRP. The behavior of the CFFT using different FRP types, including traditional FRP and LRS-FRP, was investigated, and a new hybrid FRP system combining traditional FRP with LRS-FRP to confine the concrete was examined. The influence of fiber stacking sequence was investigated by placing the PET-FRP layers on the inner surface of the FRP tube and the traditional FRP on the outer surface for some columns, and vice versa for others. In addition, the effects of the confinement ratio, the unconfined concrete nominal compressive strength (f'c), column size, and column aspect ratio were investigated. All of the investigated columns had a diameter of 150 mm, a height of 300 mm, and an aspect ratio of 2, except columns C44 to C48. Four column sizes with aspect ratios of 2 were investigated, with diameters × heights ranging from 150 mm × 300 mm to 1500 mm × 3000 mm. Three column aspect ratios ranging from 2 to 10 were investigated. Seven confinement ratios ranging from 0.3 to 1.2 were investigated for PET-FRP, PEN-FRP, glass FRP, and carbon FRP, and five concrete cylindrical compressive strengths (f'c) ranging from 27.6 MPa to 82.8 MPa were examined. Each parameter was studied independently, resulting in an analysis of 49 columns. Table 3 summarizes the investigated columns' variables.

LRS-FRP versus Traditional FRP

The CFFTs with LRS-FRP and with traditional FRP were investigated at confinement ratios ranging from 0.3 to 1.2. Figure 4 illustrates the typical axial strain-normalized strength behavior of the CFFTs with LRS-FRP and with traditional FRP, where the normalized strength was calculated as the axial stress divided by f'c. All of the columns failed by FRP rupture. The CFFTs with traditional FRP behaved, as expected, with bilinear strain-stress relationships, whereas the CFFTs with LRS-FRP showed trilinear behavior. This trilinear response resulted from the bilinear behavior of the LRS-FRP, in contrast to the linear behavior of the traditional FRP. All of the columns had the same initial stiffness, because the effect of the FRP confinement did not appear until the axial stress reached approximately f'c, when the concrete volumetric change became positive (expansion). The CFFTs with traditional FRP continued with the secant modulus until failure occurred, while the CFFTs with LRS-FRP showed a stiffness degradation after axial strains of approximately 0.016 and 0.013 for PEN-FRP and PET-FRP, respectively. The CFFTs with LRS-FRP showed higher ultimate strains and lower secant stiffness than those with traditional FRP. As expected, the CFFTs with carbon FRP tubes showed the highest secant stiffness and the lowest ultimate strain, while the CFFTs with PET-FRP showed the highest ultimate strain and the lowest secant stiffness. The CFFTs with LRS-FRP also showed a higher strength than those with traditional FRP; the reason was the high hoop rupture strain of the LRS-FRP, which reached 8.7 times that of the carbon FRP and 2.9 times that of the glass FRP. The axial strength of the CFFTs with PEN-FRP and PET-FRP was almost the same. However, the axial strength of the CFFT with PET-FRP was approximately 1.25 times that of the CFFT with carbon FRP at the same confinement ratio of 0.9.
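The one-at-a-time parametric matrix described at the start of this section is easy to enumerate programmatically. The Python sketch below builds candidate models from the stated ranges; the baseline values, the spacing of the seven confinement ratios, and the intermediate column sizes and concrete strengths are assumptions where the text does not pin them down, and the hybrid stacking cases that complete the 49-column set are not enumerated.

```python
BASELINE = {"frp": "PET", "conf_ratio": 1.2, "fc_MPa": 27.6,
            "diameter_mm": 150, "aspect_ratio": 2}

# Ranges from the parametric-study description; the spacing of the seven
# confinement ratios and the intermediate sizes/strengths are assumed.
VARIATIONS = {
    "frp": ["PET", "PEN", "Glass", "Carbon"],
    "conf_ratio": [0.3, 0.45, 0.6, 0.75, 0.9, 1.05, 1.2],
    "fc_MPa": [27.6, 41.4, 55.2, 69.0, 82.8],
    "diameter_mm": [150, 200, 750, 1500],
    "aspect_ratio": [2, 5, 10],
}

def one_at_a_time(baseline, variations):
    """Yield one model per varied value, holding every other parameter
    at its baseline (each parameter studied independently)."""
    for key, values in variations.items():
        for value in values:
            model = dict(baseline)
            model[key] = value
            yield model

models = list(one_at_a_time(BASELINE, VARIATIONS))
print(len(models), "candidate models (duplicates of the baseline included)")
```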
Figure 5 illustrates the relation between the confinement ratio and the normalized strength, and between the confinement ratio and the ultimate axial strain, for all of the FRP composites. This figure shows the efficiency of the different FRP types in normalized strength and ultimate axial strain at the same confinement ratio. It is clear from the figure that the CFFTs with LRS-FRP were more efficient than those with traditional FRP, which clearly indicates the strong effect of high rupture strain on confinement.

Figure 6 illustrates the axial strain-normalized strength for the CFFTs with traditional FRP, LRS-FRP, and hybrid FRP at the same confinement ratio of 1.2. Fiber stacking sequences were investigated by placing PET-FRP on the inner surface of the FRP tube and glass or carbon FRP on the outer surface, and vice versa. Figure 6a illustrates the PET, glass, and hybrid PET/glass cases where the PET was on the inner surface. In general, placing the LRS-FRP on the inner surface and the traditional FRP on the outer surface gave better performance than the reverse. The reason is that the rupture strain of the traditional FRP is much lower than that of the LRS-FRP, so the traditional FRP ruptures first. When the traditional FRP was placed on the inner surface, the LRS-FRP was governed by the traditional FRP's rupture strain and ruptured directly after it. When the traditional FRP was placed on the outer surface, however, the LRS-FRP continued to confine the concrete until it ruptured at high hoop strains. Therefore, the hybrid PET/traditional FRP reached higher hoop strains than the traditional FRP alone, although it ruptured at a lower strain than the LRS-FRP alone because of the interaction between the two materials in the hybrid. In general, the hybrid of PET/glass performed better than PET/carbon because of the large difference in rupture strains between PET and carbon. The axial strain-normalized strength relation of the CFFTs with hybrid FRP was nonlinear, rather than bilinear as in the case of LRS-FRP alone or linear as in the case of traditional FRP alone. The strength and ultimate axial strain of the CFFTs with hybrid FRP increased when the traditional FRP content was increased. This indicates that using a few layers of LRS-FRP with the traditional FRP would considerably improve the CFFT's performance. However, the confinement ratio contribution of the LRS-FRP has to be considerable in order to avoid sudden failure, as in the case of (PET-I + Carbon) in Figure 6c: when the carbon FRP reached its ultimate strain (only 1%), it failed, and one layer of PET-FRP was not enough to continue confining the concrete core, which led to rupture of the PET layer as well.
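The stacking-sequence argument above reduces to tracking which layer is still intact at a given hoop strain. The Python sketch below models a hybrid tube's confinement pressure as the sum of its layer contributions, dropping a layer once its rupture strain is exceeded; the layer properties are illustrative assumptions, and each layer is treated as linear for simplicity even though the LRS layer is actually bilinear.

```python
def hybrid_confinement_pressure(eps_hoop, layers, D):
    """Confinement pressure of a hybrid tube at hoop strain eps_hoop.
    Each layer is (modulus_MPa, thickness_mm, rupture_strain); a layer
    contributes nothing once its rupture strain is exceeded."""
    f_l = 0.0
    for E, t, eps_fu in layers:
        if eps_hoop <= eps_fu:              # layer still intact
            f_l += 2.0 * E * eps_hoop * t / D
    return f_l

# Illustrative hybrid: carbon (stiff, ~1 % rupture) + PET (soft, ~8.5 %).
layers = [(230_000.0, 0.17, 0.010),          # carbon layer (assumed values)
          (17_900.0, 1.0, 0.085)]            # PET layer (assumed values)
for eps in (0.005, 0.010, 0.020, 0.050, 0.085):
    f_l = hybrid_confinement_pressure(eps, layers, D=150.0)
    print(f"eps = {eps:.3f}: f_l = {f_l:.1f} MPa")
```

The printed sequence shows the drop in confinement pressure once the stiff layer ruptures, which is the mechanism behind the sudden failure noted for the (PET-I + Carbon) case.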
Unconfined Concrete Compressive Strength (f'c)

Five columns were studied with unconfined concrete compressive strengths (f'c) ranging from 27.6 MPa to 82.8 MPa. Figure 7 illustrates the axial strain-normalized strength relation of the CFFTs with different f'c. In general, changing f'c did not affect the normalized strength or the ultimate axial strain, because the columns had the same FRP confinement ratio. However, when the concrete core was high-strength (f'c ≥ 55.2 MPa), the normalized strength and ultimate axial strain were inversely proportional to f'c. The lateral concrete expansion depends on the concrete's mechanical properties: the lateral expansion of high-strength concrete is significantly higher than that of normal-strength concrete, which reduces the effect of the FRP confinement. The ultimate axial strain and the normalized strength decreased by 14.6% and 9.0%, respectively, when the f'c of the high-strength concrete increased by 25% (from 55.2 MPa to 69.0 MPa), and by 21.1% and 24.9%, respectively, when f'c increased by 50% (from 55.2 MPa to 82.8 MPa).

Column Size

Four columns with sizes ranging from 150 mm × 300 mm to 1500 mm × 3000 mm were studied. Figure 8 illustrates the axial strain-normalized strength relation of the CFFTs with different column sizes. In general, the strength was reduced as the column size increased, because the FRP confinement could not affect the whole cross-section. Figure 9 shows the axial stress distribution of all of the columns at the mid and top cross-sections. It is clear that the FRP confinement affected a zone along the outer perimeter of the cross-section, and this zone shrank as the column diameter increased. However, the columns with dimensions of 150 mm × 300 mm and 200 mm × 400 mm behaved almost identically in both the axial strain-normalized strength response and the cross-sectional stress distribution, because both sizes were small enough for a confinement ratio of 1.2 to confine the cross-section effectively.

Column Aspect Ratio

Three columns with aspect (height-to-diameter) ratios ranging from 2 to 10 were studied. Figure 10 illustrates the axial strain-normalized strength relation of the CFFTs with different aspect ratios. The ultimate axial strain and axial strength decreased as the column's aspect ratio increased. The column with an aspect ratio of 2 failed by FRP rupture, whereas the columns with aspect ratios of 5 and 10 failed in compression. Figure 11 illustrates the columns' deformations for the different aspect ratios. Figure 11a shows the global buckling that occurred in the column with an aspect ratio of 10, leading to compression failure. Figure 11b shows the deformation of the column with an aspect ratio of 5, which bulged in the top and bottom thirds, leading to compression failure. Figure 11c,d illustrate the common failure mode of a confined short column: FRP rupture at the middle part. When the aspect ratio increased from 2 to 5, the ultimate axial strain decreased by 26% and the axial strength of the CFFT with LRS-FRP decreased by 48%; when the aspect ratio increased from 2 to 10, the ultimate axial strain decreased by 63% and the axial strength decreased by 58%.
Findings and Conclusions

The behavior of concrete-filled fiber tube (CFFT) columns with new, highly deformable fiber reinforced polymers under axial compressive loading was investigated. Unlike traditional fiber reinforced polymers (FRPs) such as carbon, glass, and aramid, the new FRP composites have a large rupture strain and are made from inexpensive materials. The large rupture strain (LRS) FRP composites are made with polyethylene naphthalate (PEN) and polyethylene terephthalate (PET) fibers. The PEN and PET fibers can be used in green buildings; they are environmentally friendly because they are made from recycled materials (e.g., bottles). They have a high ultimate strain (>5.0%), but their elastic modulus is low. This study used finite element (FE) analysis in LS-DYNA to conduct an extensive parametric study of the behavior of CFFTs with LRS-FRP under axial compressive loading. Forty-nine columns were investigated to determine the important factors that may affect the behavior of CFFTs under axial compressive loading. A high range of FRP confinement ratios was investigated, and the effects of the unconfined concrete compressive strength (f'c), column size, and column aspect ratio on the behavior of the CFFT were studied. A comparison between the behavior of the CFFTs with LRS-FRP and with traditional FRP (carbon and glass) over a high range of confinement ratios was conducted as well. This paper also introduced a new state-of-the-art hybrid FRP for CFFT columns by investigating different combinations of traditional FRP with LRS-FRP. Generally, the CFFTs with LRS-FRP showed remarkable behavior under axial loading in both strength and ultimate strain.

The LRS-FRP composites were more efficient than the traditional FRP composites in both strength and ultimate strain. The hybrid FRP with a stacking sequence of LRS/glass (inner/outer of the tube) performed much better in strength than either the traditional FRP or the LRS-FRP, and it showed a higher ultimate axial strain than the traditional FRP; the LRS-FRP alone, however, achieved the highest ultimate axial strain. A new equation to estimate the concrete dilation parameter and dilation angle of CFFT columns with LRS-FRP tubes or hybrid FRP tubes was suggested.

In conclusion, LRS-FRPs are a promising family of new materials; however, more research is still required to characterize their fire resistance and durability. Their behavior with different matrices and their bond with concrete members should be investigated as well.